derived.nj

The function derived also computes Horvitz-Thompson-like estimates, but
it assumes a Poisson or binomial distribution of total number when
computing the sampling variance.

derived.nj ( nj, esa, se.esa, method = c("SRS", "local", "poisson",
    "binomial"), xy = NULL, alpha = 0.05, loginterval = TRUE, area = NULL )

derived.mash ( object, sessnum = NULL, method = c("SRS", "local"),
    alpha = 0.05, loginterval = TRUE)

derived.cluster ( object, sessnum = NULL, method = c("SRS", "local"),
    alpha = 0.05, loginterval = TRUE)

derived.session ( object, method = c("SRS", "local"), xy = NULL,
    alpha = 0.05, loginterval = TRUE )

derived.external ( object, sessnum = NULL, nj, cluster, buffer = 100,
    mask = NULL, noccasions = NULL, method = c("SRS", "local"), xy = NULL,
    alpha = 0.05, loginterval = TRUE)
method = "local"
only)mask
provided)nj
)derived.nj
accepts a vector of counts (nj
), along with
$\hat{a}$ and $\widehat{SE}(\hat{a})$. The
argument esa
may include both $\hat{a}$ and
$\widehat{SE}(\hat{a})$) - any form will do if it can
be coerced to a vector of length 2. In the special case that nj
is of length 1, or method
takes the values `poisson' or
`binomial', the variance is computed using a theoretical variance
rather than an empirical estimate. The value of method
corresponds to `distribution' in derived
, and defaults to
`poisson'. For method = 'binomial'
you must specify area
(see Examples).
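As a minimal sketch of a direct call (assuming the secr package is
loaded; the counts, esa values and object names below are hypothetical,
not drawn from any fitted model):

## hypothetical counts from six replicate clusters
nj.demo  <- c(3, 0, 5, 2, 0, 1)
## hypothetical esa as c(a-hat, SE(a-hat)), in hectares
esa.demo <- c(2.26, 0.20)
derived.nj(nj.demo, esa.demo)                            ## empirical (SRS) variance
derived.nj(sum(nj.demo), esa.demo, method = "poisson")   ## theoretical variance
derived.nj(sum(nj.demo), esa.demo, method = "binomial", area = 100)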
derived.cluster
accepts a model fitted to data from clustered
detectors; each cluster is interpreted as a replicate
sample. It is assumed that the sets of individuals sampled by
different clusters do not intersect, and that all clusters have the
same geometry (spacing, detector number etc.).
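A minimal sketch of derived.cluster, assuming the secr package is
loaded; the simulated layout and object names (clus, reg, grids, pop,
ch, fitc) are illustrative only:

## nine clustered 4 x 4 grids in a 1500 m x 1500 m region
clus  <- make.grid(nx = 4, ny = 4, spacing = 40)
reg   <- data.frame(x = c(0,1500,1500,0), y = c(0,0,1500,1500))
grids <- make.systematic(n = 9, cluster = clus, region = reg)
pop   <- sim.popn(D = 5, core = reg, buffer = 0)
ch    <- sim.capthist(grids, popn = pop)
fitc  <- secr.fit(ch, CL = TRUE, trace = FALSE)
derived.cluster(fitc)   ## each grid treated as a replicate sample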
derived.mash
accepts a model fitted to clustered data that have
been `mashed' for fast processing (see mash
); each
cluster is a replicate sample: the function uses the vector of cluster
frequencies ($n_j$) stored as an attribute of the mashed
capthist
by mash
.
derived.external
combines detection parameter estimates from a
fitted model with a vector of frequencies nj
from replicate
sampling units configured as in cluster
. Detectors in
cluster
are assumed to match those in the fitted model with
respect to type and efficiency, but sampling duration
(noccasions
), spacing etc. may differ. The mask
should
match cluster
; if mask
is missing, one will be
constructed using the buffer
argument and defaults from
make.mask
.
derived.session
accepts a single fitted model that must span
multiple sessions; each session is interpreted as a replicate sample.
Spatial variance may be calculated assuming simple random sampling
(method = "SRS"
) or using the neighbourhood variance estimator
recommended by Stevens and Olsen (2003) for generalized random
tessellation stratified (GRTS) samples and implemented in package
spsurvey (method = "local"). For `local' variance
estimates, the centre of each replicate must be provided in xy
,
except where centres may be inferred from the data. The options
method = "poisson"
and method = "binomial"
use
theoretical (model-based) variance rather than the empirical spatial
variance.
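A minimal sketch of derived.session, assuming the secr package is
loaded; the three simulated sessions and object names (tr, ch3, fit3)
are illustrative only:

tr   <- make.grid(nx = 6, ny = 6, spacing = 30)
ch3  <- sim.capthist(tr, popn = list(D = 5, buffer = 100), nsessions = 3)
fit3 <- secr.fit(ch3, CL = TRUE, trace = FALSE)
derived.session(fit3)   ## default SRS spatial variance across the 3 sessions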
See also derived, esa.
## The `ovensong' data are pooled from 75 replicate positions of a
## 4-microphone array. The array positions are coded as the first 4
## digits of each sound identifier. The sound data are initially in the
## object `signalCH'. We first impose a 52.5 dB signal threshold as in
## Dawson & Efford (2009, J. Appl. Ecol. 46:1201--1209). The vector nj
## includes 33 positions at which no ovenbird was heard. The first and
## second columns of `temp' hold the estimated effective sampling area
## and its standard error.
signalCH.525 <- subset(signalCH, cutval = 52.5)
nonzero.counts <- table(substring(rownames(signalCH.525),1,4))
nj <- c(nonzero.counts, rep(0, 75 - length(nonzero.counts)))
temp <- derived(ovensong.model.1, se.esa = TRUE)
derived.nj(nj, temp["esa",1:2])
## The result is very close to that reported by Dawson & Efford
## from a 2-D Poisson model fitted by maximizing the full likelihood.
## If the nj vector has length 1, a theoretical variance is used...
msk <- ovensong.model.1$mask
A <- nrow(msk) * attr(msk, "area")
derived.nj (sum(nj), temp["esa",1:2], method = "poisson")
derived.nj (sum(nj), temp["esa",1:2], method = "binomial", area = A)
## Set up an array of small (4 x 4) grids,
## simulate a Poisson-distributed population,
## sample from it, plot, and fit a model.
## mash() condenses clusters to a single cluster
testregion <- data.frame(x = c(0,2000,2000,0),
y = c(0,0,2000,2000))
t4 <- make.grid(nx = 4, ny = 4, spacing = 40)
t4.16 <- make.systematic (n = 16, cluster = t4,
region = testregion)
popn1 <- sim.popn (D = 5, core = testregion,
buffer = 0)
capt1 <- sim.capthist(t4.16, popn = popn1)
fit1 <- secr.fit(mash(capt1), CL = TRUE, trace = FALSE)
## Visualize sampling
tempmask <- make.mask(t4.16, spacing = 10, type =
"clusterbuffer")
plot(tempmask)
plot(t4.16, add = TRUE)
plot(capt1, add = TRUE)
## Compare model-based and empirical variances.
## Here the answers are similar because the data
## were simulated from a Poisson distribution,
## as assumed by derived()
derived(fit1)
derived.mash(fit1)
## Now simulate a patchy distribution; note the
## larger (and more credible) SE from derived.mash().
popn2 <- sim.popn (D = 5, core = testregion, buffer = 0,
model2D = "hills", details = list(hills = c(-2,3)))
capt2 <- sim.capthist(t4.16, popn = popn2)
fit2 <- secr.fit(mash(capt2), CL = TRUE, trace = FALSE)
derived(fit2)
derived.mash(fit2)
## The detection model we have fitted may be extrapolated to
## a more fine-grained systematic sample of points, with
## detectors operated on a single occasion at each...
## Total effort 400 x 1 = 400 detector-occasions, compared
## to 256 x 5 = 1280 detector-occasions for initial survey.
t1 <- make.grid(nx = 1, ny = 1)
t1.100 <- make.systematic (cluster = t1, spacing = 100,
region = testregion)
capt2a <- sim.capthist(t1.100, popn = popn2, noccasions = 1)
## one way to get number of animals per point
nj <- attr(mash(capt2a), "n.mash")
derived.external (fit2, nj = nj, cluster = t1, buffer = 100,
noccasions = 1)
## Review plots
library(MASS)
base.plot <- function() {
eqscplot( testregion, axes = FALSE, xlab = "",
ylab = "", type = "n")
polygon(testregion)
}
par(mfrow = c(1,3), xpd = TRUE, xaxs = "i", yaxs = "i")
base.plot()
plot(popn2, add = TRUE, col = "blue")
mtext(side=3, line=0.5, "Population", cex=0.8, col="black")
base.plot()
plot(capt2a, add = TRUE, title = "Extensive survey")
base.plot()
plot(capt2, add = TRUE, title = "Intensive survey")
par(mfrow = c(1,1), xpd = FALSE, xaxs = "r", yaxs = "r") ## defaults