Description:

Computation of factor scores for grm, ltm, rasch and tpm models.

Usage:

factor.scores(object, ...)
## S3 method for class 'grm':
factor.scores(object, resp.patterns = NULL,
method = c("EB", "EAP", "MI"), B = 5, ...)
## S3 method for class 'ltm':
factor.scores(object, resp.patterns = NULL,
method = c("EB", "EAP", "MI", "Component"), B = 5,
robust.se = FALSE, ...)
## S3 method for class 'rasch':
factor.scores(object, resp.patterns = NULL,
method = c("EB", "EAP", "MI"), B = 5, robust.se = FALSE, ...)
## S3 method for class 'tpm':
factor.scores(object, resp.patterns = NULL,
method = c("EB", "EAP", "MI"), B = 5, ...)
Arguments:

object: an object inheriting either from class grm, class ltm, class rasch or class tpm.

resp.patterns: a matrix or a data.frame of response patterns for which the factor scores are computed; if NULL, the factor scores are computed for the observed response patterns.

method: a character string indicating the scoring method to use; see Details.

B: the number of multiple imputations to be used if method = "MI".

robust.se: logical; if TRUE the sandwich estimator is used for the estimation of the covariance matrix of the MLEs. See the Details section for more info.

Value:

An object of class fscores, which is a list with components:

score.dat: the data.frame of observed response patterns including observed and expected frequencies (only if the observed data response matrix contains no missing values), the factor scores and their standard errors.

method: a character string giving the scoring method used.

B: the number of multiple imputations used; relevant only if method = "MI".

call: a copy of the matched call of object.

resp.pats: logical; TRUE if the resp.patterns argument has been specified.

coef: the parameter estimates returned by coef(object); this is NULL when object inherits from class grm.
Details:

The Empirical Bayes scores (use method = "EB") are the modes of the posterior distribution $p(z|x; \hat{\theta})$; together with their associated variances they are good measures of the posterior distribution as $p \rightarrow \infty$, where $p$ is the number of items.
This is based on the result $$p(z|x)=p(z|x; \hat{\theta})(1+O(1/p)),$$
where $\hat{\theta}$ are the MLEs. However, in cases where $p$ and/or $n$ (the sample size) is small,
plugging in the MLEs rather than the true parameter values introduces extra variability that should
not be ignored. A solution to this problem can be given using Multiple Imputation (MI; use method = "MI").
In particular, MI is used the other way around, i.e.:

Step 1: Simulate new parameter values, say $\theta^*$, from $N(\hat{\theta}, C(\hat{\theta}))$, where $C(\hat{\theta})$ is the large-sample covariance matrix of $\hat{\theta}$ (if robust.se = TRUE, $C(\hat{\theta})$ is based on the sandwich estimator).

Step 2: Maximize $p(z|x; \theta^*)$ with respect to $z$ and compute the variance associated with this mode.

Step 3: Repeat steps 1-2 B times and combine the estimates using the standard formulas of MI.

This scheme explicitly acknowledges the ignorance of the true parameter values by drawing from their large-sample
posterior distribution while taking into account the sampling error. The modes of the posterior distribution
$p(z|x; \theta)$ are numerically approximated using the BFGS algorithm in optim().
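The EB mode-finding and the MI scheme above can be illustrated numerically. The following is a minimal Python sketch for a one-parameter (Rasch) model with a standard-normal prior; the item difficulties b, the covariance matrix, and the use of Newton-Raphson in place of optim()'s BFGS are all illustrative assumptions, not the package's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def eb_mode(x, b, tol=1e-10):
    """Posterior mode of p(z|x; b) for a Rasch model with N(0,1) prior,
    found by Newton-Raphson (a stand-in for the package's BFGS/optim)."""
    x = np.asarray(x, float)
    z = 0.0
    for _ in range(100):
        p = sigmoid(z - b)
        grad = np.sum(x - p) - z            # d/dz log posterior
        hess = -np.sum(p * (1 - p)) - 1.0   # d^2/dz^2 log posterior (< 0)
        step = grad / hess
        z -= step
        if abs(step) < tol:
            break
    var = -1.0 / hess                       # curvature-based variance at the mode
    return z, var

def mi_score(x, b_hat, cov_b, B=20):
    """MI scoring: redo the EB step B times with parameters drawn from the
    large-sample distribution N(b_hat, cov_b), then combine via Rubin's rules."""
    modes, variances = [], []
    for _ in range(B):
        b_star = rng.multivariate_normal(b_hat, cov_b)  # Step 1: draw theta*
        z, v = eb_mode(x, b_star)                       # Step 2: mode + variance
        modes.append(z)
        variances.append(v)
    z_bar = np.mean(modes)                   # Step 3: combine the B estimates
    within = np.mean(variances)              # average within-imputation variance
    between = np.var(modes, ddof=1)          # between-imputation variance
    total_var = within + (1 + 1 / B) * between
    return z_bar, total_var
```

The total MI variance exceeds the plain EB variance by the between-imputation term, which is exactly the parameter-uncertainty component that plain EB ignores.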
The Expected a posteriori scores (use method = "EAP"
) computed by factor.scores()
are defined as
follows: $$\int z p(z | x; \hat{\theta}) dz.$$
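The EAP integral above can be approximated by simple numerical quadrature over the latent variable. A minimal Python sketch for a Rasch model with a standard-normal prior follows; the difficulties b and the fixed grid are hypothetical stand-ins for the fitted model and the package's quadrature rule.

```python
import numpy as np

def eap_score(x, b, grid=np.linspace(-6.0, 6.0, 201)):
    """Approximate E[z | x] = int z p(z | x; b) dz for a Rasch model
    by evaluating the unnormalised posterior on a fixed grid."""
    x = np.asarray(x, float)
    prior = np.exp(-0.5 * grid ** 2)                          # N(0,1) density, unnormalised
    p = 1.0 / (1.0 + np.exp(-(grid[:, None] - b[None, :])))   # P(correct | z) per item
    lik = np.prod(np.where(x[None, :] == 1.0, p, 1.0 - p), axis=1)
    post = lik * prior                                        # unnormalised posterior p(z|x)
    return float(np.sum(grid * post) / np.sum(post))          # posterior mean of z
```

Because the posterior mean is taken rather than the mode, no optimisation step is needed.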
The Component scores (use method = "Component") proposed by Bartholomew (1984) are an alternative method
to scale the sample units in the latent dimensions identified by the model that avoids the calculation of the
posterior mode. However, this method is not valid in the general case where nonlinear latent terms are assumed.

See Also:

plot.fscores, grm, ltm, rasch, tpm
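For contrast with the posterior-based methods, the component scores described in the Details section reduce to a weighted sum of the observed responses. A minimal sketch, where the weights alpha are hypothetical stand-ins for the fitted discrimination parameters:

```python
import numpy as np

def component_score(x, alpha):
    """Bartholomew-style component score: weight each observed response by
    the item's discrimination parameter; no posterior-mode search involved."""
    return float(np.dot(np.asarray(alpha, float), np.asarray(x, float)))
```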
Examples:

## Factor Scores for the Rasch model
fit <- rasch(LSAT)
factor.scores(fit) # Empirical Bayes
## Factor scores for specific patterns,
## including NA's, can be obtained by
factor.scores(fit, resp.patterns = rbind(c(1,0,1,0,1), c(NA,1,0,NA,1)))
## Factor Scores for the two-parameter logistic model
fit <- ltm(Abortion ~ z1)
factor.scores(fit, method = "MI", B = 20) # Multiple Imputation
## Factor Scores for the graded response model
fit <- grm(Science[c(1,3,4,7)])
factor.scores(fit, resp.patterns = rbind(1:4, c(NA,1,2,3)))