model) to present their raw fit statistics. Model comparisons are made by subtracting the fit of the base model from the fit of the comparison model. To ensure that the differences between models are positive and that p-values for likelihood ratio tests can be computed, the model or models listed in the base argument should be more saturated (i.e., more estimated parameters and fewer degrees of freedom) than the models listed in the comparison argument. If a comparison is made where the comparison model has a lower minus 2 log likelihood (-2LL) than the base model, then the difference in their -2LLs will be negative. P-values for likelihood ratio tests will not be reported when either the -2LL difference or the degrees-of-freedom difference for a comparison is negative.
When multiple models are included in both the base and comparison arguments, the comparisons made between the two lists of models depend on the value of the all argument. If all is set to FALSE (the default), then the first model in the base list is compared with the first model in the comparison list, the second with the second, and so on. If the base and comparison lists have unequal lengths, the shorter list is recycled to match the length of the longer one. For example, comparing base models B1 and B2 with comparison models C1, C2, and C3 yields three comparisons: B1 with C1, B2 with C2, and B1 with C3. Each of these comparisons is prefaced by a comparison between the base model and a missing comparison model, which presents the fit of the base model on its own.
If all is set to TRUE, all possible comparisons between base and comparison models are made, and one entry is made for each base model. All comparisons involving the first model in base are made first, followed by all comparisons with the second base model, and so on. When there are multiple models in either the base or comparison arguments but not both, then the all argument does not affect the set of comparisons made.
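The pairing rules above can be sketched in plain R. The helper below is hypothetical, written only to illustrate the recycling and crossing behavior; it is not part of OpenMx, which performs this pairing internally inside mxCompare:

```r
# Hypothetical sketch of how base and comparison model names are paired.
pair_models <- function(base, comparison, all = FALSE) {
  if (all) {
    # all = TRUE: every base crossed with every comparison, grouped by base model
    expand.grid(comparison = comparison, base = base,
                stringsAsFactors = FALSE)[, c("base", "comparison")]
  } else {
    # all = FALSE: element-wise pairing; the shorter list is recycled
    n <- max(length(base), length(comparison))
    data.frame(base       = rep_len(base, n),
               comparison = rep_len(comparison, n))
  }
}

pair_models(c("B1", "B2"), c("C1", "C2", "C3"))
#   base comparison
# 1   B1         C1
# 2   B2         C2
# 3   B1         C3
```

With all = TRUE, the same call would instead return all six pairs, ordered B1 with C1, C2, C3, then B2 with C1, C2, C3.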
The following columns appear in the output:
base: Name of the base model.
comparison: Name of the comparison model.
ep: Number of estimated parameters.
minus2LL: Minus 2 log-likelihood.
df: Degrees of freedom.
AIC: Akaike's Information Criterion.
diffLL: Difference in minus 2 log-likelihoods between the base and comparison models.
diffdf: Difference in degrees of freedom between the base and comparison models.
p: p-value for the likelihood ratio test of the difference between models.
The mxCompare function will give a p-value for any comparison in which both diffLL and diffdf are non-negative. However, this p-value is based on the assumptions of the likelihood ratio test, specifically that the two models being compared are nested. The likelihood ratio test and associated p-values are not valid when the comparison model is not nested in the referenced base model.
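The p column can be reproduced by hand from diffLL and diffdf using the chi-squared distribution in base R. The fit statistics below are invented purely for illustration, not taken from a real model:

```r
# Hand computation of the likelihood ratio test behind the p column.
base_minus2LL <- 4010.5; base_df <- 994   # more saturated base model
comp_minus2LL <- 4018.3; comp_df <- 997   # nested comparison model

diffLL <- comp_minus2LL - base_minus2LL   # 7.8
diffdf <- comp_df - base_df               # 3

# Upper-tail chi-squared probability; valid only when the
# comparison model is nested in the base model.
p <- pchisq(diffLL, df = diffdf, lower.tail = FALSE)
```

Here diffLL falls just below the 0.05 critical value for 3 degrees of freedom (7.815), so p lands slightly above 0.05.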
Use options('digits' = N) to set the minimum number of significant digits to be printed in values. The mxCompare function does not directly accept a digits argument, and depends on the value of the 'digits' option.
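For example:

```r
# Printed precision follows the global 'digits' option.
old <- options(digits = 3)   # request 3 significant digits
print(pi)                    # 3.14
options(old)                 # restore the previous setting
```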