SSN (version 1.1.7)

InfoCritCompare: Compare glmssn Information Criteria

Description

InfoCritCompare displays important model selection criteria for each glmssn-class object in the model list.

Usage

InfoCritCompare(model.list)

Arguments

model.list
a list of fitted glmssn-class model objects in the form list(model1, model2, ...)

Value

InfoCritCompare returns a data.frame of the model criteria for each specified glmssn-class object. These are useful for comparing and selecting models. The columns in the data.frame are described below. In the description below 'obs' is an observed data value, 'pred' is its prediction using cross-validation, and 'predSE' is the prediction standard error using cross-validation.
formula
model formula
EstMethod
estimation method, either maximum likelihood (ML) or restricted maximum likelihood (REML)
Variance_Components
names of the variance components, including the autocovariance model names, the nugget effect, and the random effects.
neg2LogL
-2 log-likelihood. Note that neg2LogL is only returned if the Gaussian distribution (the default) was specified when creating the glmssn object.
AIC
Akaike Information Criterion (AIC). Note that AIC is only returned if the Gaussian distribution (the default) was specified when creating the glmssn object.
bias
bias, computed as mean(obs - pred).
std.bias
standardized bias, computed as mean((obs - pred)/predSE).
RMSPE
root mean-squared prediction error, computed as sqrt(mean((obs - pred)^2))
RAV
root average variance, computed as sqrt(mean(predSE^2)). If the prediction standard errors are being estimated well, this should be close to RMSPE.
std.MSPE
standardized mean-squared prediction error, computed as mean(((obs - pred)/predSE)^2). If the prediction standard errors are being estimated well, this should be close to 1.
cov.80
the proportion of times that the observed value was within the prediction interval formed from pred +- qt(.9, df)*predSE, where qt is the quantile t function, and df is the number of degrees of freedom. If there is little bias and the prediction standard errors are being estimated well, this should be close to 0.8 for large sample sizes.
cov.90
the proportion of times that the observed value was within the prediction interval formed from pred +- qt(.95, df)*predSE, where qt is the quantile t function, and df is the number of degrees of freedom. If there is little bias and the prediction standard errors are being estimated well, this should be close to 0.9 for large sample sizes.
cov.95
the proportion of times that the observed value was within the prediction interval formed from pred +- qt(.975, df)*predSE, where qt is the quantile t function, and df is the number of degrees of freedom. If there is little bias and the prediction standard errors are being estimated well, this should be close to 0.95 for large sample sizes.
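The cross-validation columns above can all be reproduced from three vectors: observations, cross-validation predictions, and prediction standard errors. A minimal sketch follows; the vectors obs, pred, and predSE here are simulated for illustration, not produced by the package (in practice they come from the cross-validation routines, e.g. CrossValidationStatsSSN).

```r
# Illustrative data; in real use obs/pred/predSE come from cross-validation
set.seed(1)
n      <- 100
obs    <- rnorm(n, mean = 10)
predSE <- runif(n, 0.8, 1.2)
pred   <- obs + rnorm(n, sd = predSE)

bias     <- mean(obs - pred)                    # bias
std.bias <- mean((obs - pred) / predSE)         # standardized bias
RMSPE    <- sqrt(mean((obs - pred)^2))          # root mean-squared prediction error
RAV      <- sqrt(mean(predSE^2))                # root average variance
std.MSPE <- mean(((obs - pred) / predSE)^2)     # standardized MSPE

# Empirical coverage of the nominal 80% prediction interval;
# df is the degrees of freedom (n - 1 here, purely for illustration)
df     <- n - 1
half   <- qt(0.9, df) * predSE
cov.80 <- mean(obs >= pred - half & obs <= pred + half)
```

With well-calibrated prediction standard errors, RAV should track RMSPE, std.MSPE should be near 1, and cov.80 should be near 0.8.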

Details

InfoCritCompare displays important model criteria that can be used to compare and select spatial statistical models. For instance, spatial models can be compared with non-spatial models, other spatial models, or both.

See Also

glmssn, summary.glmssn, AIC, CrossValidationStatsSSN

Examples

  library(SSN)
  data(modelFits)

  compare.models <- InfoCritCompare(list(fitNS, fitRE, fitSp, fitSpRE1, fitSpRE2))
  
  # Examine the model criteria
  compare.models

  # Compare the AIC values for all models with random effects
  compare.models[c(2,4,5),c("Variance_Components","AIC")]
  
  # Compare the RMSPE for the spatial models
  compare.models[c(3,4,5),c("Variance_Components","RMSPE")]
  
  # Compare the RMSPE between spatial and non-spatial models
  compare.models[c(1,3),c("formula","Variance_Components", "RMSPE")]
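Because InfoCritCompare returns an ordinary data.frame, standard indexing applies beyond the fixed row selections above. As a hedged extension (assuming the same compare.models object from the example), the models can be ranked by AIC:

```r
  # Rank all models from lowest (best) to highest AIC
  compare.models[order(compare.models$AIC), c("formula", "AIC")]
```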
