dclf.test(X, ..., alternative=c("two.sided", "less", "greater"),
          rinterval = NULL, use.theo=FALSE)

mad.test(X, ..., alternative=c("two.sided", "less", "greater"),
         rinterval = NULL, use.theo=FALSE)
Arguments

X: An object of class "ppp", "lpp" or other class of point pattern dataset, a fitted point process model (object of class "ppm", "kppm" or other class of fitted model), a simulation envelope (object of class "envelope"), or a previously performed test (object of class "htest").

...: Arguments passed to envelope. Useful arguments include fun to determine the summary function, nsim to specify the number of Monte Carlo simulations, and verbose=FALSE to turn off progress reports.

rinterval: Interval of values of the distance argument r over which the maximum absolute deviation, or the integral, will be computed for the test. A numeric vector of length 2.

use.theo: Logical value determining whether the summary functions should be compared to the theoretical value for CSR (use.theo=TRUE) or to the sample mean of simulations from CSR (use.theo=FALSE).

Value

An object of class "htest". Printing this object gives a report on the result of the test. The $p$-value is contained in the component p.value.

Details

dclf.test
performs the test advocated by Loosmore and Ford (2006)
which is also described in Diggle (1986), Cressie (1991, page 667, equation
(8.5.42)) and Diggle (2003, page 14). See Baddeley et al (2014).
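Both statistics can be illustrated outside spatstat. The following base-R sketch (all object names are hypothetical, and spatstat's implementation differs in detail) computes a DCLF-style integrated squared deviation and a MAD-style maximum absolute deviation from a matrix of simulated summary functions, together with a Monte Carlo $p$-value:

```r
## A base-R sketch of the two statistics (all names hypothetical).
## Rows of `sims` index r values; columns are simulated summary
## functions generated under the null hypothesis.
set.seed(42)
r    <- seq(0, 1, length.out = 101)
nsim <- 99
sims <- sapply(seq_len(nsim), function(i) r + rnorm(101, sd = 0.05))
obs  <- r + rnorm(101, sd = 0.05)       # "observed" summary function
ref  <- rowMeans(sims)                  # reference curve (mean of simulations)
dr   <- diff(r)[1]                      # grid spacing for the integral

dclf.stat <- function(f) sum((f - ref)^2) * dr  # integrated squared deviation
mad.stat  <- function(f) max(abs(f - ref))      # maximum absolute deviation

## Monte Carlo p-value: rank of the observed statistic among all nsim+1 values
pval <- function(stat) {
  u.obs <- stat(obs)
  u.sim <- apply(sims, 2, stat)
  (1 + sum(u.sim >= u.obs)) / (nsim + 1)
}
pval(dclf.stat)
pval(mad.stat)
```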
mad.test performs the 'global' or 'Maximum Absolute Deviation' test described by Ripley (1977, 1981).

If X is some kind of point pattern, then a test of Complete Spatial Randomness (CSR) will be performed. That is, the null hypothesis is that the point pattern is completely random.

If X
is a fitted point process model, then a test of
goodness-of-fit for the fitted model will be performed. The model object
contains the data point pattern to which it was originally fitted.
The null hypothesis is that the data point pattern is a realisation
of the model.

If X is an envelope object generated by envelope, then it should have been generated with savefuns=TRUE or savepatterns=TRUE so that it contains simulation results. These simulations will be treated as realisations from the null hypothesis.

Alternatively X could be a previously-performed
test of the same kind (i.e. the result of calling dclf.test or mad.test). The simulations used to perform the original test will be re-used to perform the new test (provided these simulations were saved in the original test, by setting savefuns=TRUE or savepatterns=TRUE).

The argument alternative
specifies the alternative hypothesis,
that is, the direction of deviation that will be considered
statistically significant. If alternative="two.sided"
(the
default), both positive and negative deviations (between
the observed summary function and the theoretical function)
are significant. If alternative="less"
, then only negative
deviations (where the observed summary function is lower than the
theoretical function) are considered. If alternative="greater"
,
then only positive deviations (where the observed summary function is
higher than the theoretical function) are considered.
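Schematically, the three alternatives amount to different one-number summaries of the deviation curve. A base-R sketch (hypothetical names; spatstat's internals differ):

```r
## Signed deviations between observed and reference summary functions
set.seed(3)
dev <- rnorm(50)               # observed minus reference, at each r value

two.sided <- max(abs(dev))     # deviations in either direction count
greater   <- max(pmax(dev, 0)) # only positive deviations count
less      <- max(pmax(-dev, 0))# only negative deviations count
```

Note that the two-sided statistic is always the larger of the two one-sided ones, so a two-sided test reacts to whichever direction of departure is strongest.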
In all cases, the algorithm will first call envelope
to
generate or extract the simulated summary functions.
The number of simulations that will be generated or extracted,
is determined by the argument nsim
, and defaults to 99.
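A point worth noting (standard Monte Carlo test reasoning, not specific to spatstat): with nsim simulations the smallest attainable $p$-value is $1/(\mbox{nsim}+1)$, which is why round numbers such as 19, 99 and 199 are conventional choices.

```r
## Smallest attainable Monte Carlo p-value for a given number of simulations:
## the observed statistic is ranked among nsim + 1 values in total.
p.min <- function(nsim) 1 / (nsim + 1)
p.min(99)   # 0.01
p.min(19)   # 0.05
```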
The summary function that will be computed is determined by the
argument fun
(or the first unnamed argument in the list
...
) and defaults to Kest
(except when
X
is an envelope object generated with savefuns=TRUE
,
when these functions will be taken).
The choice of summary function fun
affects the power of the
test. It is normally recommended to apply a variance-stabilising
transformation (Ripley, 1981). If you are using the $K$ function,
the normal practice is to replace this by the $L$ function
(Besag, 1977) computed by Lest
. If you are using
the $F$ or $G$ functions, the recommended practice is to apply
Fisher's variance-stabilising transformation
$\sin^{-1}\sqrt x$ using the argument
transform
. See the Examples.
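To see why Fisher's transformation helps, here is a numerical illustration in base R (independent of spatstat; all names are illustrative): an estimated proportion has variance $p(1-p)/n$, which changes with $p$, while $\sin^{-1}\sqrt{\hat p}$ has variance approximately $1/(4n)$ regardless of $p$.

```r
## Variance of estimated proportions, raw vs Fisher-transformed
set.seed(1)
n  <- 100
ps <- c(0.1, 0.3, 0.5, 0.7, 0.9)
vars <- sapply(ps, function(p) {
  phat <- rbinom(10000, n, p) / n           # estimated proportions
  c(raw = var(phat), fisher = var(asin(sqrt(phat))))
})
round(vars, 5)
## raw variance depends on p (about p*(1-p)/n), while the transformed
## variance stays near 1/(4*n) = 0.0025 across all values of p
```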
The argument rinterval
specifies the interval of
distance values $r$ which will contribute to the
test statistic (either maximising over this range of values
for mad.test
, or integrating over this range of values
for dclf.test
). This affects the power of the test.
General advice and experiments in Baddeley et al (2014) suggest
that the maximum $r$ value should be slightly larger than
the maximum possible range of interaction between points. The
dclf.test
is quite sensitive to this choice, while the
mad.test
is relatively insensitive.
It is also possible to specify a pointwise test (i.e. taking a single, fixed value of distance $r$) by specifying rinterval = c(r,r).
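The effect of rinterval can be sketched in base R (illustrative only, not spatstat code; a small tolerance guards against floating-point comparison at the interval endpoints):

```r
## Restricting the test statistic to an r-interval (hypothetical names)
set.seed(7)
r   <- seq(0, 1, length.out = 101)
dev <- abs(rnorm(101))                 # |observed - reference| at each r
rinterval <- c(0.2, 0.5)
ok  <- r >= rinterval[1] - 1e-9 & r <= rinterval[2] + 1e-9

mad.restricted  <- max(dev[ok])                 # MAD over the interval only
dclf.restricted <- sum(dev[ok]^2) * diff(r)[1]  # integral over the interval
## rinterval = c(r0, r0) would keep a single distance: a pointwise test
```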
References

Baddeley, A., Diggle, P.J., Hardegen, A., Lawrence, T., Milne, R.K. and Nair, G. (2014) On tests of spatial pattern based on simulation envelopes. Ecological Monographs 84, 477--489.

Besag, J. (1977) Contribution to the discussion of Dr Ripley's paper. Journal of the Royal Statistical Society, Series B 39, 193--195.

Cressie, N.A.C. (1991) Statistics for Spatial Data. John Wiley and Sons.

Diggle, P.J. (1986) Displaced amacrine cells in the retina of a rabbit: analysis of a bivariate spatial point pattern. Journal of Neuroscience Methods 18, 115--125.

Diggle, P.J. (2003) Statistical Analysis of Spatial Point Patterns. Second edition. Arnold.

Loosmore, N.B. and Ford, E.D. (2006) Statistical inference using the G or K point pattern spatial statistics. Ecology 87, 1925--1931.

Ripley, B.D. (1977) Modelling spatial patterns (with discussion). Journal of the Royal Statistical Society, Series B 39, 172--212.

Ripley, B.D. (1981) Spatial Statistics. John Wiley and Sons.
See Also

envelope, dclf.progress
Examples

dclf.test(cells, Lest, nsim=39)
m <- mad.test(cells, Lest, verbose=FALSE, rinterval=c(0, 0.1), nsim=19)
m
# extract the p-value
m$p.value
# variance stabilised G function
dclf.test(cells, Gest, transform=expression(asin(sqrt(.))),
verbose=FALSE, nsim=19)
## one-sided test
ml <- mad.test(cells, Lest, verbose=FALSE, nsim=19, alternative="less")