
openCR (version 2.2.6)

AIC.openCR: Compare openCR Models

Description

Terse report on the fit of one or more spatially explicit capture-recapture models. Models with smaller values of AIC (Akaike's Information Criterion) are preferred.

Usage

# S3 method for openCR
AIC(object, ..., sort = TRUE, k = 2, dmax = 10, use.rank = FALSE,
    svtol = 1e-5, criterion = c('AIC','AICc'), n = NULL)

# S3 method for openCRlist
AIC(object, ..., sort = TRUE, k = 2, dmax = 10, use.rank = FALSE,
    svtol = 1e-5, criterion = c('AIC','AICc'), n = NULL)

# S3 method for openCR
logLik(object, ...)

Value

A data frame with one row per model. By default, rows are sorted by ascending AIC.

model

character string describing the fitted model

npar

number of parameters estimated

rank

rank of Hessian

logLik

maximized log likelihood

AIC

Akaike's Information Criterion

AICc

AIC with small-sample adjustment of Hurvich & Tsai (1989)

dAICc

difference between the AICc of this model and that of the model with the smallest AICc

AICwt

AICc model weight

logLik.openCR returns an object of class 'logLik' with attribute df (degrees of freedom = number of estimated parameters).
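
As a small illustrative sketch (the object name 'fit' below is hypothetical, standing in for a model returned by openCR.fit), the log likelihood and its df attribute can be extracted directly:

ll <- logLik(fit)    # object of class 'logLik'
attr(ll, 'df')       # number of estimated beta parameters
as.numeric(ll)       # maximized log likelihood as a plain number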

Arguments

object

an openCR object from the function openCR.fit, or an openCRlist

...

other openCR objects

sort

logical for whether rows should be sorted by ascending AICc

k

numeric, the penalty per parameter to be used; always k = 2 in this method

dmax

numeric, the maximum AIC difference for inclusion in confidence set

use.rank

logical; if TRUE the number of parameters is based on the rank of the Hessian matrix

svtol

minimum singular value (eigenvalue) of Hessian used when counting non-redundant parameters

criterion

character, criterion to use for model comparison and weights

n

integer effective sample size

Details

Models to be compared must have been fitted to the same data and use the same likelihood method (full vs conditional).

AIC with small sample adjustment is given by

$$ \mbox{AIC}_c = -2\log(L(\hat{\theta})) + 2K + \frac{2K(K+1)}{n-K-1} $$

where \(K\) is the number of "beta" parameters estimated. By default, the effective sample size \(n\) is the number of individuals observed at least once (i.e. the number of rows in capthist). This differs from the default in MARK, which for CJS models is the sum of the sizes of the release cohorts (see m.array).
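
As a rough consistency check (a sketch only: 'fit' and the effective sample size below are hypothetical, not taken from this help page), AICc can be recomputed from the quantities reported in the table:

ll <- logLik(fit)                      # maximized log likelihood
K  <- attr(ll, 'df')                   # number of beta parameters
n  <- 50                               # e.g. number of individuals seen at least once
AICc <- -2 * as.numeric(ll) + 2 * K + 2 * K * (K + 1) / (n - K - 1)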

Model weights are calculated as $$w_i = \frac{\exp(-\Delta_i / 2)}{\sum_j \exp(-\Delta_j / 2)}$$

Models for which dAIC > dmax are given a weight of zero and are excluded from the summation. Model weights may be used to form model-averaged estimates of real or beta parameters with modelAverage (see also Buckland et al. 1997, Burnham and Anderson 2002).
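
The weight calculation can be sketched in a few lines of base R (the AICc values below are made up for illustration):

AICc  <- c(350.2, 352.9, 361.4)        # hypothetical AICc for three models
delta <- AICc - min(AICc)              # dAICc relative to the best model
w <- exp(-delta / 2)
w[delta > 10] <- 0                     # dmax = 10: zero weight outside the confidence set
w <- w / sum(w)                        # normalize so the weights sum to 1
round(w, 4)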

The argument k is included for consistency with the generic method AIC.

References

Buckland, S. T., Burnham, K. P. and Augustin, N. H. (1997) Model selection: an integral part of inference. Biometrics 53, 603--618.

Burnham, K. P. and Anderson, D. R. (2002) Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. Second edition. New York: Springer-Verlag.

Hurvich, C. M. and Tsai, C. L. (1989) Regression and time series model selection in small samples. Biometrika 76, 297--307.

See Also

AIC, openCR.fit, print.openCR, LR.test

Examples


if (FALSE) {
library(openCR)
## two Jolly-Seber (JSSA, 'f' parameterisation) models for the ovenbird data ovenCH
m1 <- openCR.fit(ovenCH, type = 'JSSAf')
## as above, but with capture probability p varying by session
m2 <- openCR.fit(ovenCH, type = 'JSSAf', model = list(p ~ session))
AIC(m1, m2)
}
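
A possible follow-on (hedged: it assumes m1 and m2 above were fitted successfully) collates the models with openCRlist and reports the plain AIC criterion instead:

if (FALSE) {
fits <- openCRlist(m1, m2)
AIC(fits, criterion = 'AIC')
}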
