ctmm (version 0.5.5)

bandwidth: Calculate the optimal bandwidth matrix of movement data

Description

This function calculates the optimal bandwidth matrix (kernel covariance) for a two-dimensional animal tracking dataset, given an autocorrelated movement model (Fleming et al., 2015). The optimal bandwidth fully accounts for the autocorrelation in the data, insofar as it is captured by the movement model.
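
A minimal sketch of a typical workflow, using the buffalo example data bundled with ctmm (the object names and the ctmm.guess step are illustrative, not part of bandwidth itself):

library(ctmm)
data(buffalo)                                # example telemetry data bundled with ctmm
DATA <- buffalo$Cilla                        # telemetry object for one animal
GUESS <- ctmm.guess(DATA, interactive=FALSE) # initial parameter guess
FIT <- ctmm.fit(DATA, GUESS)                 # autocorrelated movement model
H <- bandwidth(DATA, FIT)                    # optimal bandwidth (kernel covariance) matrix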

Usage

bandwidth(data,CTMM,VMM=NULL,weights=FALSE,fast=TRUE,dt=NULL,precision=1/2,PC="Markov",
  verbose=FALSE,trace=FALSE)

Arguments

data

2D time-series telemetry data, represented as a telemetry object.

CTMM

A ctmm movement model, as output by ctmm.fit.

VMM

An optional vertical ctmm object for 3D bandwidth calculation.

weights

By default, the weights are taken to be uniform; weights=TRUE will instead optimize the weights.

fast

Use FFT algorithms for weight optimization.

dt

Optional lag bin width for the FFT algorithm.

precision

Fraction of maximum possible digits of precision to target in weight optimization. precision=1/2 results in about 7 decimal digits of precision if the preconditioner is stable.

PC

Preconditioner to use: can be "Markov", "circulant", "IID", or "direct".

verbose

Optionally return the optimal weights, effective sample size DOF.H, and other information along with the bandwidth matrix H.

trace

Produce tracing information on the progress of weight optimization.

Value

Returns a bandwidth matrix object, which is the optimal covariance matrix for the individual kernels of the kernel density estimate.
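
With a 2D telemetry object and verbose=FALSE, this is a 2x2 covariance matrix in the square of the data's spatial units. A brief sketch of inspecting it, continuing the illustrative DATA and FIT objects from the Description (the verbose output is described only loosely above, so str() is used to inspect it):

H <- bandwidth(DATA, FIT)                 # 2x2 kernel covariance matrix
sqrt(diag(H))                             # kernel standard deviations along x and y
OUT <- bandwidth(DATA, FIT, verbose=TRUE) # also returns the weights, DOF.H, etc.
str(OUT)                                  # inspect the additional output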

Details

The weights=TRUE argument can be used to correct temporal sampling bias caused by autocorrelation. weights=TRUE will optimize n=length(data$t) weights via constrained & preconditioned conjugate gradient algorithms. These algorithms have a few options that should be considered if the data are very irregular.
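
For example, to correct for a bursty or otherwise irregular sampling schedule (a sketch, reusing the illustrative DATA and FIT objects from the Description):

H.w <- bandwidth(DATA, FIT, weights=TRUE)               # optimize the kernel weights
OUT <- bandwidth(DATA, FIT, weights=TRUE, verbose=TRUE) # also return the weights and DOF.H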

fast=TRUE grids the data with grid width dt and applies FFT algorithms, for a computational cost as low as \(O(n \log n)\) with only \(O(n)\) function evaluations. If no dt is specified, the minimum sampling interval min(diff(data$t)) is used. If the data are irregular (permitting gaps), then dt may need to be several times smaller to avoid slowdown. In this case, try setting trace=TRUE and decreasing dt until the iterations speed up and the number of feasibility assessments becomes less than \(O(n)\). On the other hand, if the data contain some very tiny time intervals, say 1 second among hourly sampled data, then the default dt setting will create an excessively high-resolution discretization of time, which will also cause slowdown. In this case, CTMM should contain an error model and dt can likely be increased to a larger fraction of the median sampling interval.
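
A sketch of the dt tuning described above; the choice of one quarter of the median sampling interval is purely illustrative and should be adjusted while watching the trace output:

# e.g., hourly data containing a few 1-second fixes (CTMM should then include an error model)
dt <- median(diff(DATA$t))/4                                 # coarser than min(diff(DATA$t))
H.w <- bandwidth(DATA, FIT, weights=TRUE, dt=dt, trace=TRUE) # monitor iterations and feasibility assessments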

fast=FALSE uses the exact sampling times and has a computational cost as low as \(O(n^2)\), including \(O(n^2)\) function evaluations. With PC="direct", this method produces a result that is exact to within machine precision, but at a computational cost of \(O(n^3)\).
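
For small datasets, this exact solution can serve as a reference for the gridded FFT result; a sketch:

H.exact <- bandwidth(DATA, FIT, weights=TRUE, fast=FALSE, PC="direct") # O(n^3), exact to machine precision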

References

T. F. Chan. An Optimal Circulant Preconditioner for Toeplitz Systems. SIAM Journal on Scientific and Statistical Computing, 9:4, 766-771 (1988).

D. Marcotte. Fast variogram computation with FFT. Computers and Geosciences 22:10, 1175-1186 (1996).

C. H. Fleming, W. F. Fagan, T. Mueller, K. A. Olson, P. Leimgruber, J. M. Calabrese. Rigorous home-range estimation with movement data: A new autocorrelated kernel-density estimator. Ecology, 96:5, 1182-1188 (2015).

C. H. Fleming, D. Sheldon, W. F. Fagan, P. Leimgruber, T. Mueller, D. Nandintsetseg, M. J. Noonan, K. A. Olson, E. Setyawan, A. Sianipar, J. M. Calabrese. Correcting for missing and irregular data in home-range estimation. Ecological Applications, 28:4, 1003-1010 (2018).

See Also

akde, ctmm.fit