Methods for computation of the p-value, mid-p-value, p-value interval and test size.
# S4 method for PValue
pvalue(object, q, ...)

# S4 method for NullDistribution
pvalue(object, q, ...)

# S4 method for ApproxNullDistribution
pvalue(object, q, ...)

# S4 method for IndependenceTest
pvalue(object, ...)

# S4 method for MaxTypeIndependenceTest
pvalue(object, method = c("global", "single-step",
                          "step-down", "unadjusted"),
       distribution = c("joint", "marginal"),
       type = c("Bonferroni", "Sidak"), ...)

# S4 method for NullDistribution
midpvalue(object, q, ...)

# S4 method for ApproxNullDistribution
midpvalue(object, q, ...)

# S4 method for IndependenceTest
midpvalue(object, ...)

# S4 method for NullDistribution
pvalue_interval(object, q, ...)

# S4 method for IndependenceTest
pvalue_interval(object, ...)

# S4 method for NullDistribution
size(object, alpha, type = c("p-value", "mid-p-value"), ...)

# S4 method for IndependenceTest
size(object, alpha, type = c("p-value", "mid-p-value"), ...)
object: an object from which the p-value, mid-p-value, p-value interval or test size can be computed.

q: a numeric, the quantile for which the p-value, mid-p-value or p-value interval is computed.

method: a character, the method used for the p-value computation: either "global" (default), "single-step", "step-down" or "unadjusted".

distribution: a character, the distribution used for the computation of adjusted p-values: either "joint" (default) or "marginal".

type: pvalue(): a character, the type of p-value adjustment: either "Bonferroni" (default) or "Sidak". size(): a character, the type of rejection region used when computing the test size: either "p-value" (default) or "mid-p-value".

alpha: a numeric, the nominal significance level.

...: further arguments (currently ignored).
The p-value, mid-p-value, p-value interval or test size computed from object. A numeric vector or matrix.
The methods pvalue, midpvalue, pvalue_interval and size compute the p-value, mid-p-value, p-value interval and test size, respectively.
For pvalue, the global p-value (method = "global") is returned by default and is given with an associated 99% confidence interval when resampling is used to determine the null distribution (which for maximum statistics may be true even in the asymptotic case).
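As a brief sketch (not part of the original examples; the data and seed are arbitrary), a resampling-based global p-value prints together with its 99% confidence interval:

## Sketch: global p-value with Monte Carlo confidence interval (illustration only)
library("coin")
set.seed(290875)
dta <- data.frame(y = rnorm(20), x = gl(2, 10))
it <- independence_test(y ~ x, data = dta,
                        distribution = approximate(nresample = 10000))
pvalue(it)   # the print method also reports a 99% confidence interval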
The familywise error rate (FWER) is always controlled under the global null hypothesis, i.e., in the weak sense, implying that the smallest adjusted p-value is valid without further assumptions.

Assuming subset pivotality, single-step or free step-down adjusted p-values are obtained by setting the method argument to "single-step" or "step-down" respectively. In both cases, the distribution argument specifies whether the adjustment is based on the joint distribution ("joint") or the marginal distributions ("marginal") of the test statistics. For procedures based on the marginal distributions, Bonferroni- or Šidák-type adjustment can be specified through the type argument by setting it to "Bonferroni" or "Sidak" respectively.
Unadjusted p-values are obtained using method = "unadjusted".
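The following sketch is not part of the original examples; it contrasts joint single-step adjusted p-values with a naive Bonferroni correction of the unadjusted p-values (data set and seed are arbitrary). Up to Monte Carlo error, the joint adjustment is typically no more conservative than Bonferroni.

## Sketch: joint single-step adjustment vs. naive Bonferroni correction of
## unadjusted p-values (illustration only; any bivariate data set would do)
library("coin")
set.seed(290875)
dta2 <- data.frame(y1 = rnorm(20) + rep(0:1, each = 10),
                   y2 = rnorm(20),
                   x  = gl(2, 10))
it <- independence_test(y1 + y2 ~ x, data = dta2,
                        distribution = approximate(nresample = 10000))
p_ss  <- as.vector(pvalue(it, method = "single-step"))                # joint distribution
p_bon <- p.adjust(as.vector(pvalue(it, method = "unadjusted")), "bonferroni")
cbind(single_step = p_ss, bonferroni = p_bon)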
For midpvalue, the global mid-p-value is returned and is given with an associated 99% confidence interval when resampling is used to determine the null distribution.

The p-value interval computed by pvalue_interval was proposed by Berger (2000, 2001), where the upper endpoint is the conventional p-value and the lower endpoint is the p-value obtained when the point probability of the observed test statistic is excluded from the tail probability; the mid-p-value lies halfway between the two endpoints.
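A short sketch, not part of the original examples, illustrating the three quantities on an exact two-sample test (the data and seed are arbitrary):

## Sketch: mid-p-value and p-value interval for an exact test (illustration only)
library("coin")
set.seed(290875)
dta <- data.frame(y = rnorm(20), x = gl(2, 10))
at <- ansari_test(y ~ x, data = dta, distribution = "exact")
pvalue(at)            # conventional p-value (upper endpoint of the interval)
pvalue_interval(at)   # p-value interval of Berger (2000, 2001)
midpvalue(at)         # should lie halfway between the interval endpoints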
For size, the test size, i.e., the actual significance level, at the nominal significance level alpha is computed using the rejection region corresponding to either the p-value (type = "p-value", default) or the mid-p-value (type = "mid-p-value"). The test size is, in contrast to the nominal significance level, typically not attained exactly when the null distribution is discrete; a rejection region based on the mid-p-value may yield a test size exceeding the nominal level.
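As a small sketch (assuming the exact two-sample setting shown; not part of the original examples), size() can be compared directly against the nominal level:

## Sketch: test size vs. nominal level for an exact test (illustration only)
library("coin")
set.seed(290875)
dta <- data.frame(y = rnorm(20), x = gl(2, 10))
at <- ansari_test(y ~ x, data = dta, distribution = "exact")
size(at, alpha = 0.05)                        # cannot exceed the nominal level
size(at, alpha = 0.05, type = "mid-p-value")  # may exceed the nominal level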
Berger, V. W. (2000). Pros and cons of permutation tests in clinical trials. Statistics in Medicine 19(10), 1319--1328. 10.1002/(SICI)1097-0258(20000530)19:10<1319::AID-SIM490>3.0.CO;2-0
Berger, V. W. (2001). The p-value interval as an inferential tool. The Statistician 50(1), 79--85.
Bretz, F., Hothorn, T. and Westfall, P. (2011). Multiple Comparisons Using R. Boca Raton: CRC Press.
Hirji, K. F., Tan, S.-J. and Elashoff, R. M. (1991). A quasi-exact test for comparing two binomial proportions. Statistics in Medicine 10(7), 1137--1153. 10.1002/sim.4780100713
Westfall, P. H. and Troendle, J. F. (2008). Multiple testing with minimal assumptions. Biometrical Journal 50(5), 745--755. 10.1002/bimj.200710456
Westfall, P. H. and Wolfinger, R. D. (1997). Multiple tests with discrete distributions. The American Statistician 51(1), 3--8. 10.1080/00031305.1997.10473577
Westfall, P. H. and Young, S. S. (1993). Resampling-Based Multiple Testing: Examples and Methods for p-Value Adjustment. New York: John Wiley & Sons.
## Two-sample problem
dta <- data.frame(
    y = rnorm(20),
    x = gl(2, 10)
)

## Exact Ansari-Bradley test
(at <- ansari_test(y ~ x, data = dta, distribution = "exact"))
pvalue(at)
midpvalue(at)
pvalue_interval(at)
size(at, alpha = 0.05)
size(at, alpha = 0.05, type = "mid-p-value")

## Bivariate two-sample problem
dta2 <- data.frame(
    y1 = rnorm(20) + rep(0:1, each = 10),
    y2 = rnorm(20),
    x = gl(2, 10)
)

## Approximative (Monte Carlo) bivariate Fisher-Pitman test
(it <- independence_test(y1 + y2 ~ x, data = dta2,
                         distribution = approximate(nresample = 10000)))

## Global p-value
pvalue(it)

## Joint distribution single-step p-values
pvalue(it, method = "single-step")

## Joint distribution step-down p-values
pvalue(it, method = "step-down")

## Sidak step-down p-values
pvalue(it, method = "step-down", distribution = "marginal", type = "Sidak")

## Unadjusted p-values
pvalue(it, method = "unadjusted")

## Length of YOY Gizzard Shad (Hollander and Wolfe, 1999, p. 200, Tab. 6.3)
yoy <- data.frame(
    length = c(46, 28, 46, 37, 32, 41, 42, 45, 38, 44,
               42, 60, 32, 42, 45, 58, 27, 51, 42, 52,
               38, 33, 26, 25, 28, 28, 26, 27, 27, 27,
               31, 30, 27, 29, 30, 25, 25, 24, 27, 30),
    site = gl(4, 10, labels = as.roman(1:4))
)

## Approximative (Monte Carlo) Fisher-Pitman test with contrasts
## Note: all pairwise comparisons
(it <- independence_test(length ~ site, data = yoy,
                         distribution = approximate(nresample = 10000),
                         xtrafo = mcp_trafo(site = "Tukey")))

## Joint distribution step-down p-values
pvalue(it, method = "step-down") # subset pivotality is violated