
mirt (version 0.2.6)

polymirt: Full-Information Item Factor Analysis for Mixed Data Formats

Description

polymirt fits an unconditional (exploratory) full-information maximum-likelihood factor analysis model to dichotomous and polychotomous data under the item response theory paradigm using Cai's (2010) Metropolis-Hastings Robbins-Monro algorithm. If requested, lower asymptote parameters are estimated with a beta prior included automatically.

Usage

polymirt(data, nfact, guess = 0, upper = 1, estGuess =
    NULL, estUpper = NULL, prev.cor = NULL, rotate =
    'varimax', Target = NULL, verbose = TRUE, calcLL =
    TRUE, draws = 2000, debug = FALSE, technical = list(),
    ...)

## S3 method for class 'polymirt': summary(object, rotate='', suppress = 0, digits = 3, print = FALSE, ...)

## S3 method for class 'polymirt': coef(object, rotate = '', SE = TRUE, digits = 3, ...)

## S3 method for class 'polymirt': plot(x, npts = 50, type = 'info', rot = list(x = -70, y = 30, z = 10), ...)

## S3 method for class 'polymirt': residuals(object, restype = 'LD', digits = 3, printvalue = NULL, ...)

## S3 method for class 'polymirt': anova(object, object2, ...)

## S3 method for class 'polymirt': fitted(object, digits = 3, ...)

Arguments

data
a matrix or data.frame that consists of numerically ordered data
nfact
number of factors to be extracted
guess
starting (or fixed) values for the pseudo-guessing parameter. Can be entered as a single value to assign a global guessing parameter, or as a numeric vector with one value per item (see the sketch following this argument list)
upper
initial (or fixed) upper bound parameters for 4-PL model. Can be entered as a single value to assign a global upper bound parameter or may be entered as a numeric vector corresponding to each item
estGuess
a logical vector indicating which lower-asymptote parameters are to be estimated (default is NULL, so estimation is contingent on the values in guess). By default, if any value in guess is greater than 0 then its respective estGuess element is set to TRUE
estUpper
same function as estGuess, but for upper bound parameters
prev.cor
a previously computed correlation matrix to be used for estimating starting values. The input can be any correlation matrix, but a matrix of polychoric correlations is advised
rotate
type of rotation to perform after the initial orthogonal parameters have been extracted by using summary; default is 'varimax'. See mirt for a list of possible rotations. If rotate != '' is specified in summary, the rotation stored in the object is ignored and the newly specified rotation is used instead
SE
logical; display the standard errors?
printvalue
a numeric value to be specified when using the restype = 'exp' option. Only prints response patterns with standardized residuals greater than abs(printvalue). The default (NULL) prints all response patterns
print
logical; print output to console?
x
an object of class polymirtClass to be plotted or printed
object
a model estimated from polymirt of class polymirtClass
object2
a model estimated from polymirt of class polymirtClass
suppress
a numeric value; (possibly rotated) factor loadings with absolute values below this threshold are suppressed. Typical values are around .3 in most statistical software
digits
the number of significant digits to which values are rounded
npts
number of quadrature points to be used for plotting features. Larger values make plots look smoother
rot
allows rotation of the 3D graphics
verbose
logical; display iteration history during estimation?
calcLL
logical; calculate the log-likelihood?
restype
type of residuals to be displayed. Can be either 'LD' for a local dependence matrix (Chen & Thissen, 1997) or 'exp' for the expected values for the frequencies of every response pattern
draws
the number of Monte Carlo draws to estimate the log-likelihood
type
either 'info' or 'infocontour' to plot test information plots
Target
a dummy variable matrix indicating a target rotation pattern
debug
logical; turn on debugging features?
technical
a list specifying subtle estimation parameters that can be adjusted; see the mirt package documentation for the available options
...
additional arguments to be passed to the confmirt estimation engine
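
As a hypothetical illustration of how per-item guess and estGuess values might be supplied, consider the following sketch. The object names and numeric values are illustrative only, and the data matrix is assumed to be a dichotomous matrix like the fulldata object built in the Examples section below.

# Fix a .25 lower asymptote for the first two items and freely estimate
# the lower asymptotes of the remaining items (illustrative values only)
nitems <- ncol(fulldata)
gvals  <- c(.25, .25, rep(0, nitems - 2))
gest   <- c(FALSE, FALSE, rep(TRUE, nitems - 2))
modg   <- polymirt(fulldata, nfact = 1, guess = gvals, estGuess = gest)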

Details

polymirt follows the item factor analysis strategy described by Cai (2010), using a stochastic (Metropolis-Hastings Robbins-Monro) version of maximum-likelihood estimation. The general equation used for multidimensional item response theory in this package is the logistic form with a scaling correction of 1.702, applied to allow comparison with mainstream programs such as TESTFACT (Wood et al., 2003) and POLYFACT. Missing data are treated as missing at random, so every response vector is included in the estimation (i.e., full-information). Residuals are computed using the LD statistic (Chen & Thissen, 1997) in the lower triangle of the matrix returned by residuals, with Cramer's V above the diagonal. For computing the log-likelihood more accurately, see logLik.
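
The trace line implied by this parameterization can be written out directly. Below is a minimal sketch of a multidimensional 4PL logistic response function with the 1.702 scaling correction; the function name and arguments are illustrative and not part of the mirt API.

# P(theta) = g + (u - g) / (1 + exp(-1.702 * (a'theta + d)))
traceline4PL <- function(theta, a, d, g = 0, u = 1, D = 1.702) {
    # theta: latent trait vector; a: slope vector; d: intercept
    # g: lower asymptote (guessing); u: upper asymptote
    g + (u - g) / (1 + exp(-D * (sum(a * theta) + d)))
}

# e.g., a two-dimensional item evaluated at theta = c(0, 0)
traceline4PL(c(0, 0), a = c(1.2, 0.8), d = -0.5, g = 0.2)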

Use of plot will display the test information function for 1- and 2-dimensional solutions. To examine individual item plots use itemplot (although the plink package is much more suitable for IRT graphics), which will also plot information and surface functions.
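
For example, given a fitted two-factor model such as mod2 from the Examples below, calls along these lines could be used (the itemplot call is a sketch; consult its own documentation for the exact arguments):

plot(mod2)                        # test information surface (type = 'info')
plot(mod2, type = 'infocontour')  # contour version of the test information
itemplot(mod2, 1)                 # plots for the first item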

coef displays the item parameters with their associated standard errors, while summary transforms the slopes into a factor-loadings metric. Factor loading values below a specified constant can also be suppressed in summary for better visual clarity. Models may be compared with the anova function, which displays a Chi-squared difference test and the AIC difference.

References

Cai, L. (2010). High-Dimensional exploratory item factor analysis by a Metropolis-Hastings Robbins-Monro algorithm. Psychometrika, 75, 33-57.

Chalmers, R. P. (2012). mirt: A Multidimensional Item Response Theory Package for the R Environment. Journal of Statistical Software, 48(6), 1-29.

Wood, R., Wilson, D. T., Gibbons, R. D., Schilling, S. G., Muraki, E., & Bock, R. D. (2003). TESTFACT 4 for Windows: Test Scoring, Item Statistics, and Full-information Item Factor Analysis [Computer software]. Lincolnwood, IL: Scientific Software International.

See Also

expand.table, key2binary, polymirt, itemplot

Examples

#load LSAT section 7 data and compute 1 and 2 factor models
data(LSAT7)
fulldata <- expand.table(LSAT7)

(mod1 <- polymirt(fulldata, 1))
summary(mod1)
residuals(mod1)

(mod2 <- polymirt(fulldata, 2))
summary(mod2)
coef(mod2)
anova(mod1,mod2)

###########
#data from the 'ltm' package in numeric format
data(Science)
(mod1 <- polymirt(Science, 1))
summary(mod1)
residuals(mod1)
coef(mod1)

(mod2 <- polymirt(Science, 2, calcLL = FALSE)) #don't calculate log-likelihood
mod2 <- logLik(mod2,5000) #calc log-likelihood here with more draws
summary(mod2, 'promax', suppress = .3)
coef(mod2)
anova(mod1,mod2)
