p.adjust
Adjust P-values for Multiple Comparisons
Given a set of p-values, returns p-values adjusted using one of several methods.
 Keywords
 htest
Usage
p.adjust(p, method = p.adjust.methods, n = length(p))
p.adjust.methods
# c("holm", "hochberg", "hommel", "bonferroni", "BH", "BY",
# "fdr", "none")
Arguments
 p
 numeric vector of p-values (possibly with NAs). Any other R object is coerced by as.numeric.
 method
 correction method, a character string. Can be abbreviated.
 n
 number of comparisons, must be at least length(p); only set this (to non-default) when you know what you are doing!
Details
The adjustment methods include the Bonferroni correction ("bonferroni"), in which the p-values are multiplied by the number of comparisons. Less conservative corrections are also included by Holm (1979) ("holm"), Hochberg (1988) ("hochberg"), Hommel (1988) ("hommel"), Benjamini & Hochberg (1995) ("BH" or its alias "fdr"), and Benjamini & Yekutieli (2001) ("BY"), respectively. A pass-through option ("none") is also included.
The set of methods are contained in the p.adjust.methods vector for the benefit of methods that need to have the method as an option and pass it on to p.adjust.
The first four methods are designed to give strong control of the familywise error rate. There seems no reason to use the unmodified Bonferroni correction because it is dominated by Holm's method, which is also valid under arbitrary assumptions.
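The dominance claim above is easy to check directly. A minimal sketch, using an illustrative p-value vector that is not part of this help page:

```r
## Hypothetical p-values, for demonstration only
p <- c(0.01, 0.02, 0.03, 0.04, 0.05)
p.bonf <- p.adjust(p, method = "bonferroni")  # each p multiplied by 5
p.holm <- p.adjust(p, method = "holm")        # step-down multipliers 5, 4, 3, 2, 1
## Holm's adjusted p-values are never larger than Bonferroni's
stopifnot(all(p.holm <= p.bonf))
```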
Hochberg's and Hommel's methods are valid when the hypothesis tests are independent or when they are non-negatively associated (Sarkar, 1998; Sarkar and Chang, 1997). Hommel's method is more powerful than Hochberg's, but the difference is usually small and the Hochberg p-values are faster to compute.
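The same kind of spot check applies here; on an illustrative vector (hypothetical values, not from this page) Hommel's adjusted p-values never exceed Hochberg's:

```r
## Hypothetical p-values, for demonstration only
p <- c(0.01, 0.015, 0.02, 0.04, 0.20)
p.hommel   <- p.adjust(p, method = "hommel")
p.hochberg <- p.adjust(p, method = "hochberg")
## Hommel is uniformly at least as powerful as Hochberg
stopifnot(all(p.hommel <= p.hochberg))
```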
The "BH" (aka "fdr") and "BY" methods of Benjamini, Hochberg, and Yekutieli control the false discovery rate, the expected proportion of false discoveries amongst the rejected hypotheses. The false discovery rate is a less stringent condition than the familywise error rate, so these methods are more powerful than the others.
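A sketch of that extra power on simulated data (the mixture below — 10 real effects among 100 tests — is an assumption for illustration, not from this page):

```r
set.seed(42)
## 10 tests with real effects (small p-values), 90 true nulls (uniform p-values)
p <- c(runif(10, 0, 0.005), runif(90))
rejected.holm <- sum(p.adjust(p, "holm") <= 0.05)  # familywise error control
rejected.BH   <- sum(p.adjust(p, "BH")   <= 0.05)  # false discovery rate control
## BH adjusted p-values are never larger than Holm's,
## so BH rejects at least as many hypotheses
stopifnot(rejected.BH >= rejected.holm)
```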
Note that you can set n larger than length(p), which means the unobserved p-values are assumed to be greater than all the observed p for the "bonferroni" and "holm" methods and equal to 1 for the other methods.
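A minimal sketch of passing a larger n (illustrative numbers, not from this page):

```r
## Three observed p-values, treated as 3 out of 10 comparisons
p <- c(0.01, 0.02, 0.03)
p.adjust(p, method = "bonferroni", n = 10)  # 0.10 0.20 0.30
p.adjust(p, method = "holm",       n = 10)  # 0.10 0.18 0.24 (multipliers 10, 9, 8)
```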
Value

A numeric vector of corrected p-values (of the same length as p, with names copied from p).
References
Benjamini, Y., and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society Series B 57, 289–300.
Benjamini, Y., and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics 29, 1165–1188.
Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70.
Hommel, G. (1988). A stagewise rejective multiple test procedure based on a modified Bonferroni test. Biometrika 75, 383–386.
Hochberg, Y. (1988). A sharper Bonferroni procedure for multiple tests of significance. Biometrika 75, 800–803.
Shaffer, J. P. (1995). Multiple hypothesis testing. Annual Review of Psychology 46, 561–576. (An excellent review of the area.)
Sarkar, S. (1998). Some probability inequalities for ordered MTP2 random variables: a proof of Simes' conjecture. Annals of Statistics 26, 494–504.
Sarkar, S., and Chang, C. K. (1997). Simes' method for multiple hypothesis testing with positively dependent test statistics. Journal of the American Statistical Association 92, 1601–1608.
Wright, S. P. (1992). Adjusted P-values for simultaneous inference. Biometrics 48, 1005–1013. (Explains the adjusted P-value approach.)
See Also
pairwise.* functions such as pairwise.t.test.
Examples
library(stats)
require(graphics)
set.seed(123)
x <- rnorm(50, mean = c(rep(0, 25), rep(3, 25)))
p <- 2*pnorm(sort(-abs(x)))
round(p, 3)
round(p.adjust(p), 3)
round(p.adjust(p, "BH"), 3)
## or all of them at once (dropping the "fdr" alias):
p.adjust.M <- p.adjust.methods[p.adjust.methods != "fdr"]
p.adj <- sapply(p.adjust.M, function(meth) p.adjust(p, meth))
p.adj.60 <- sapply(p.adjust.M, function(meth) p.adjust(p, meth, n = 60))
stopifnot(identical(p.adj[,"none"], p), p.adj <= p.adj.60)
round(p.adj, 3)
## or a bit nicer:
noquote(apply(p.adj, 2, format.pval, digits = 3))
## and a graphic:
matplot(p, p.adj, ylab="p.adjust(p, meth)", type = "l", asp = 1, lty = 1:6,
main = "P-value adjustments")
legend(0.7, 0.6, p.adjust.M, col = 1:6, lty = 1:6)
## Can work with NA's:
pN <- p; iN <- c(46, 47); pN[iN] <- NA
pN.a <- sapply(p.adjust.M, function(meth) p.adjust(pN, meth))
## The smallest 20 P-values, all affected by the NA's:
round((pN.a / p.adj)[1:20, ] , 4)