Usage:

ratiocalc(data, group = NULL, ratio = c("ind", "first"),
          which.eff = c("sig", "sli", "exp"), iter = c("combs", "perms"),
          rep.all = TRUE, ttest = c("cp", "Ecp"), ...)
A list containing the calculated ratios (ratio), the results of the error analysis from propagate (propagate), the number of observations (n) and the p-values from the t-test for each permutation/combination of runs/replicates (p.value). For details on the error calculation, see propagate.

A group variable must be defined for the different target and reference runs. In general, target PCRs are defined by (replicate) numbers < 100, while reference PCRs are >= 100. Runs are matched by x vs. x + 100 (see the sketch after the grouping examples below). If no grouping vector is defined, PCR runs are treated as single runs.
If reference PCRs are given, their number must match the number of target PCRs. Both target and reference data/grouping can be mixed but should be defined in ascending order,
i.e. NOT c(1, 1, 2, 2, 102, 102, 101, 101).
Examples:
No replicates: NULL (the default).
Three runs with four replicates each: c(1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3) or gl(3, 4).
Six single runs with reference data: c(1, 2, 3, 4, 5, 6, 101, 102, 103, 104, 105, 106).
Three runs with two replicates each and reference data: c(1, 101, 1, 101, 2, 102, 2, 102, 3, 103, 3, 103).
Same as above, but in a different order: c(1, 1, 2, 2, 3, 3, 101, 101, 102, 102, 103, 103).
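To make the matching convention concrete, a small sketch (with a hypothetical grouping vector) of how target runs pair with their references:

group <- c(1, 1, 2, 2, 3, 3, 101, 101, 102, 102, 103, 103)
targets <- sort(unique(group[group < 100]))
## each target run x is matched to reference run x + 100
cbind(target = targets, reference = targets + 100)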
Ratios and propagated errors are calculated for all pairwise permutations/combinations of (replicated) runs and normalized against reference data, if these are supplied. Different values for the efficiency can be applied within the function, such that the calculated ratios are based either on individual efficiencies or on an efficiency that is held constant for all runs. In detail, this means for the different values of ratio:

"ind": the ratios are calculated from the individual efficiency estimates of each run.

"first": the efficiency estimate of the first curve is used for all runs.

The propagated errors are calculated by propagate and often seem quite high. This largely depends on the error of the base (i.e. the efficiency)
of the exponential function. The error usually decreases when setting cov = TRUE in the ... part of the function. It is debatable, however, whether the variables 'efficiency' and 'threshold cycles' have a covariance structure at all. As the efficiency is deduced at the second derivative maximum of the sigmoidal curve, variance in the latter should have an effect on the former, such that using a variance-covariance matrix might be feasible. It is also commonly encountered that the propagated error is much higher when reference data are used, as the number of partial derivative functions increases.
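For instance, a minimal sketch of setting cov = TRUE through the ... argument, reusing the modlist setup from the examples below:

DAT <- modlist(reps, 2:9, fct = l5())
GROUP <- c(1, 1, 2, 2, 101, 101, 102, 102)
## cov = TRUE is handed through '...' to the error propagation
res <- ratiocalc(DAT, group = GROUP, cov = TRUE)
print(res$ratio)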
The t-test can either be conducted on the crossing points (cp) or on $E^{cp}$, using the efficiency (E) as defined above. If reference data is supplied, the t-test is done on the delta-ct values (or $E^{\Delta ct}$) from target/reference and/or control/reference. If p.value = -1, an error occurred in t.test.
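As an illustration, a minimal sketch (with hypothetical efficiency and crossing point values) of the two quantities that the ttest argument switches between:

E <- 1.8                          ## assumed constant efficiency
cp.target <- c(15.1, 15.3, 15.2)  ## hypothetical target crossing points
cp.ref <- c(18.0, 18.2, 17.9)     ## hypothetical reference crossing points
t.test(cp.target, cp.ref)         ## ttest = "cp": test on the crossing points
t.test(E^cp.target, E^cp.ref)     ## ttest = "Ecp": test on E^cp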
## using 'modlist'
DAT <- modlist(reps, 2:9, fct = l5())
GROUP <- c(1, 1, 2, 2, 101, 101, 102, 102)
res <- ratiocalc(DAT, group = GROUP)
print(res$ratio)
## using 'pcrbatch' and combinations
DAT2 <- pcrbatch(reps, 2:9, fct = l5())
GROUP <- c(1, 1, 2, 2, 101, 101, 102, 102)
res <- ratiocalc(DAT2, group = GROUP, iter = "combs")
print(res$ratio)
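For the iter argument, a generic illustration (not the internal implementation) of the difference between combinations and permutations of runs:

runs <- 1:3
t(combn(runs, 2))  ## "combs": unordered pairs of runs
## "perms": ordered pairs of distinct runs
subset(expand.grid(run1 = runs, run2 = runs), run1 != run2)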
## using only the efficiency estimate of the
## first curve
res <- ratiocalc(DAT2, group = GROUP, ratio = "first")
print(res$ratio)
## using a constant value for the efficiency;
## assuming 'which.eff' also accepts a numeric value (e.g. 1.8)
res <- ratiocalc(DAT2, group = GROUP, which.eff = 1.8)
print(res$ratio)
## strong differences in calculated error and
## simulated error indicate non-normality of
## propagated error
res <- ratiocalc(DAT, group = GROUP, do.sim = TRUE)
print(res$ratio)
## Does error propagation in qPCR quantitation make sense?
## In ratio calculations based on (E1^cp1)/(E2^cp2),
## only 2% error in each of the variables results in
## over 50% propagated error!
x <- NULL
y <- NULL
for (i in seq(0, 0.1, by = 0.01)) {
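## each variable is defined as c(mean, sd), with sd = mean * c.v. (i)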
E1 <- c(1.7, 1.7 * i)
cp1 <- c(15, 15 * i)
E2 <- c(1.7, 1.7 * i)
cp2 <- c(18, 18 * i)
DF <- cbind(E1, cp1, E2, cp2)
res <- propagate(expression((E1^cp1)/(E2^cp2)), DF, type = "stat")
x <- c(x, i * 100)
y <- c(y, (res$errProp/res$evalExpr) * 100)
}
plot(x, y, xlim = c(0, 10), lwd = 2, xlab = "c.v. [%]", ylab = "c.v. (prop) [%]")