Usage

tolerance(rs, test = "upper", method = "ASY",
pooled = NULL, symchoice = NULL, hpsim = NULL,
h0sim = NULL, reduce = 1, ITER = 1000,
exactL2 = TRUE, comment = TRUE)

Arguments

rs
An object of class "rrs" resulting from a call to risk, giving the fixed or adaptive kernel-smoothed risk function.

test
A character string: "upper" (default; performs upper-tailed tests examining heightened risk `hotspots'), "lower" (lower-tailed tests examining `troughs') or "double" (two-tailed tests). See Details.

method
A character string: "ASY" (default) for asymptotic tolerance contours, or "MC" for Monte-Carlo contours. See Details.

pooled
An object of class "bivden" resulting from a call to bivariate.density (or the pooled component of rs). Required when method = "ASY"; ignored otherwise.

symchoice
A character string denoting the density to use as the common pilot estimate for the symmetric adaptive estimator (Davies et al. 2015). Used only if method = "MC"; ignored otherwise (asymptotic version still in development).

hpsim
Controls the pilot (or fixed) bandwidths used for the case and control densities at each iteration when computing Monte-Carlo tolerance contours. May be NULL (default), in which case the pilot bandwidths used at each iteration are the same values used for the original density estimates used in creating rs; a numeric vector of length 1 or 2; or a function of the permuted data (see Details).

h0sim
As for hpsim, but for the global bandwidths used at each iteration when computing Monte-Carlo tolerance contours for adaptive estimates in rs. Not used if rs is a fixed-bandwidth estimate.

reduce
A numeric value between 0 and 1 (defaulting to 1) controlling the resolution of the grid on which the p-value surface is evaluated. See Details.

ITER
The number of Monte-Carlo iterations to perform when method = "MC" (defaulting to 1000). Non-integer numeric values are rounded. Ignored when method = "ASY".

exactL2
A boolean value indicating whether or not to separately calculate the `L2' integral components for adaptive tolerance contours. Ignored if rs (and pooled) are fixed-bandwidth density estimates, or if method = "MC". Defaults to TRUE.

comment
A boolean value indicating whether or not to print function progress. Defaults to TRUE.

Value

A list with the following components:

X
A numeric vector of length ceiling(reduce*res) giving the evaluation locations on the x axis (where res is the grid resolution as specified in the calls to bivariate.density for calculation of the densities for rs and pooled).

Y
As for X, but for the y axis.

Z
A ceiling(reduce*res) by ceiling(reduce*res) matrix giving the values of the risk surface over the evaluation grid. Values corresponding to grid coordinates outside the study region are assigned NA. If method = "MC", this will be a single value of NA.

P
A ceiling(reduce*res) by ceiling(reduce*res) matrix giving the p-values corresponding to the evaluation grid in light of the elected test.

Details

Evaluation of the p-value surface can be computationally expensive, particularly for adaptive risk surfaces; reduce provides an option to reduce this computation time by decreasing the resolution of the evaluation grid.
However, the accuracy and appearance of the resulting tolerance contours can be severely degraded if reduce is assigned too small a value. Care must therefore be taken, and consideration given to the resolution of the original evaluation grid, when altering reduce from its default value. For most practical purposes, we have found that a value of reduce resulting in evaluation of a p-value surface of size 50 by 50 is adequate.

The Monte-Carlo approach is currently the only way to obtain tolerance contours for the adaptive-symmetric estimator. Given the computational expense, especially for adaptive estimates, it is recommended that you perform some preliminary runs with a small ITER value and/or make use of reduce to achieve acceptable running times.
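Such a preliminary run can be sketched as follows (a sketch only; my.rrs is assumed to be a previously computed risk surface, and the specific ITER and reduce values are purely illustrative trial settings):

```r
## A cheap trial run before committing to a full Monte-Carlo evaluation.
## 'my.rrs' is assumed to be an object of class "rrs" from an earlier call
## to risk(); ITER = 100 and reduce = 0.5 are illustrative trial values.
tol.trial <- tolerance(rs = my.rrs, method = "MC",
                       ITER = 100, reduce = 0.5)

## once timings are acceptable, re-run with the full ITER = 1000 and
## reduce = 1 for the final contours
```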
Choosing different options for the argument test simply manipulates the `direction' of the p-values. That is, plotting tolerance contours at a significance level of 0.05 for a p-value surface calculated with test = "double" is equivalent to plotting tolerance contours at significance levels of 0.025 and 0.975 for test = "upper".
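To illustrate this equivalence (a sketch, assuming my.rrs and my.pooled are existing risk-surface and pooled-density objects, and that a plot of the risk surface is already open so that contours can be added):

```r
## the same risk surface tested in both directions
tol.double <- tolerance(rs = my.rrs, test = "double", pooled = my.pooled)
tol.upper  <- tolerance(rs = my.rrs, test = "upper",  pooled = my.pooled)

## contours at significance level 0.05 for the double-tailed surface...
contour(tol.double$X, tol.double$Y, tol.double$P, levels = 0.05, add = TRUE)

## ...coincide with contours at 0.025 and 0.975 for the upper-tailed surface
contour(tol.upper$X, tol.upper$Y, tol.upper$P,
        levels = c(0.025, 0.975), add = TRUE)
```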
Implementation of the Monte-Carlo contours for the fixed-bandwidth estimator simply involves random allocation of case/control marks and re-estimation of the risk surface ITER times, against which the original estimate is compared. The bandwidth(s) for case and control densities in the permuted estimates are controlled by hpsim. If your risk surface is adaptive, hpsim is used to control the pilot bandwidths, h0sim the global bandwidths. In particular, for the adaptive symmetric estimator (Davies et al. 2015), it is assumed that the original estimate was itself calculated as a symmetric estimate via use of the pdef argument. The symchoice argument governs the specific permuted data set to use for the pilot estimate, and if hpsim is NULL, the pilot bandwidth thereof is taken from the stored pdef object in the original estimate. An error will occur if you attempt to set symchoice with an rs argument in this function that does not contain density estimates with present pdef objects of class "bivden". See the help file for bivariate.density for details on using the pdef argument.
In addition to the usage noted above, you may define hpsim and/or h0sim as functions which re-calculate the case and control pilot (or fixed) bandwidth(s) and the global bandwidth(s) at each iteration, based on the data set of the permuted case/control marks. If so, these must strictly be functions that take the case data as the first argument and the control data as the second argument, each as a two-column matrix of the x-y coordinates. Each function must strictly return a numeric vector of length 1 or 2, with these entries assigned to the relevant density estimates as per the usage notes on supplying a numeric vector for hpsim. Take care: warnings will be issued if, for example, you specify an hpsim function that returns two numeric values but your rs object is a symmetric-adaptive estimate (in which case it only makes sense to yield one pilot bandwidth).
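As a sketch of this usage, a bandwidth-recalculating function conforming to the interface described above might look like the following (the selector inside the function is deliberately simple and entirely hypothetical; only the argument/return structure reflects the requirements stated above):

```r
## Sketch: an 'hpsim' function that recomputes bandwidths at each iteration.
## Per the requirements above, it takes the permuted case coordinates as its
## first argument and the permuted control coordinates as its second, each a
## two-column matrix, and returns a numeric vector of length 1 or 2.
hp.fun <- function(cases, controls) {
  ## a hypothetical, deliberately simple selector: scale-based bandwidths
  ## computed from the coordinate spread of each permuted sample
  h.case <- mean(apply(cases, 2, sd)) * nrow(cases)^(-1/6)
  h.con  <- mean(apply(controls, 2, sd)) * nrow(controls)^(-1/6)
  c(h.case, h.con)  ## length 2: one bandwidth each for cases and controls
}

## then, e.g. (assuming 'my.rrs' is an existing fixed-bandwidth risk surface):
## tolerance(rs = my.rrs, method = "MC", hpsim = hp.fun)
```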
Examples

## Load the package and the Chorley-Ribble data
library(sparr)
data(chorley)
## Select a common fixed bandwidth by least-squares cross-validation
ch.h <- LSCV.density(chorley, hlim = c(0.1, 2))
## Estimate pooled, case (larynx) and control (lung) densities
ch.pool <- bivariate.density(data = chorley, pilotH = ch.h,
adaptive = FALSE)
ch.case <- bivariate.density(data = chorley, ID = "larynx", pilotH = ch.h,
adaptive = FALSE)
ch.con <- bivariate.density(data = chorley, ID = "lung", pilotH = ch.h,
adaptive = FALSE)
##Compute log-risk surface and asymptotic p-value surfaces
ch.rrs <- risk(f = ch.case, g = ch.con)
ch.tol <- tolerance(rs = ch.rrs, pooled = ch.pool)
plot(ch.rrs)
contour(ch.tol$X, ch.tol$Y, ch.tol$P, levels = 0.05, add = TRUE)

## Adaptive risk surface for the PBC data, with tolerance contours
## requested directly via the 'tolerate' argument of risk
data(PBC)
PBC.casedata <- split(PBC)[[1]]
PBC.controldata <- split(PBC)[[2]]
pbc.rrs.rawdata <- risk(f = PBC.casedata, g = PBC.controldata,
adaptive = TRUE, tolerate = TRUE)
plot(pbc.rrs.rawdata, display = "3d", aspect = 1:2, col = heat.colors(12)[12:1],
tolerance.matrix = pbc.rrs.rawdata$P, tol.opt = list(col = "white", raise = 0.03))