benchmark serves as a more accurate replacement for the often-seen
system.time(replicate(1000, expr))
idiom. It tries hard to measure only the time it takes to evaluate expr.
To achieve this, it uses the sub-millisecond (often nanosecond) accurate
timing functions that most modern operating systems provide. Additionally,
all evaluations of the expressions are done in C++ code to minimize any
measurement error.
benchmark(
...,
times = 100L,
order = c("random", "inorder", "block"),
envir = parent.frame(),
progress = TRUE,
gcFirst = TRUE,
gcDuring = FALSE
)
...: Any number of unevaluated expressions to benchmark, passed as named or unnamed arguments.
times: Integer. Number of times to evaluate each expression.
order: Character. The order in which the expressions are evaluated.
envir: The environment in which the expressions will be evaluated.
progress: Logical. Show a progress bar while the expressions are evaluated.
gcFirst: Logical. Should a garbage collection be performed immediately before the timing?
gcDuring: Logical. Should a garbage collection be performed immediately before each of the times iterations? (Very slow.)
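As a minimal sketch of how these arguments fit together (assuming the package that provides benchmark(), e.g. benchr, is installed and attached):

```r
# Sketch, assuming benchmark() is available from the attached package.
library(benchr)

# Named arguments label the expressions in the result; unnamed
# expressions are shown in their deparsed form.
res <- benchmark(
  baseline = NULL,
  squares  = (1:100)^2,
  times    = 50L,
  progress = FALSE
)
print(res)
```

Naming the expressions makes the printed and plotted output easier to read than the raw deparsed code.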
Object of class benchmark, which is a data.frame with
a number of additional attributes and contains the following columns:

- The deparsed expression as passed to benchmark, or the name of the
  argument if the expression was passed as a named argument.
- The measured execution time of the expression in seconds. The order of
  the observations in the data frame is the order in which they were executed.
- Timer precision in seconds.
- Timer error (overhead) in seconds.
- Units for time intervals (by default, "s" -- seconds).
- Number of repeats for each measurement.
- Execution regime.
- Whether garbage collection took place before each execution.
- "random" (the default) randomizes the execution order;
- "inorder" executes the expressions in the order given;
- "block" executes all repetitions of each expression as one block.
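The regimes above can be selected explicitly. A hedged sketch (again assuming benchmark() is available from an attached package such as benchr):

```r
# Sketch, assuming benchmark() is available from the attached package.
f <- function() NULL

# "block" runs all 100 repetitions of NULL first, then all 100 of f();
# "inorder" alternates them in the order given;
# "random" (the default) shuffles the individual evaluations.
res_block <- benchmark(NULL, f(), times = 100L, order = "block")
```

Randomized order is usually the safest default, since it spreads transient system load evenly across the expressions being compared.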
Before evaluating each expression times times, the overhead of
calling the timing functions and the C++ function call overhead are
estimated. This estimated overhead is subtracted from each measured
evaluation time. Should the resulting timing be negative, a warning
is thrown and the respective value is replaced by 0. If the timing
is zero, a warning is also raised. Should all evaluations result in one of
the two warning conditions described above, an error is raised.
summary.benchmark(), mean.benchmark(), print.benchmark(), plot.benchmark(), boxplot.benchmark()
## Measure the time it takes to dispatch a simple function call
## compared to simply evaluating the constant NULL
f <- function() NULL
res <- benchmark(NULL, f(), times = 1000L)
## Print results:
print(res)
## Plot results:
boxplot(res)