vetr (version 0.2.7)

bench_mark: Lightweight Benchmarking Function

Description

Evaluates the provided expressions in a loop and reports the mean evaluation time. This is inferior to microbenchmark and other benchmarking tools in many ways, except that it has zero dependencies or suggested packages, which helps keep package build and test times down. Used in vignettes.

Usage

bench_mark(..., times = 1000L, deparse.width = 40)

Arguments

...

expressions to benchmark; these are captured unevaluated

times

how many times to loop, defaults to 1000

deparse.width

how many characters to deparse for labels

Value

NULL, invisibly; timings are reported as a side effect via screen output.

Details

Runs gc() before each expression is evaluated. Expressions are evaluated in the order provided. Attempts to estimate the overhead of the loop by running a loop that evaluates NULL the same number of times.
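
The overhead-estimation idea can be sketched with base R timing (a minimal illustration of the concept, not vetr's actual implementation; the expression and loop mechanics here are assumptions):

times <- 1000L
# Time an empty loop to estimate the per-iteration looping overhead
overhead <- system.time(for (i in seq_len(times)) NULL)[["elapsed"]]
# Time the expression of interest in an identical loop
total <- system.time(for (i in seq_len(times)) runif(1000))[["elapsed"]]
# Net time per call, with the loop overhead removed
(total - overhead) / times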

Unfortunately, because this computes the average of all iterations, it is very susceptible to outliers in small sample runs, particularly with fast-running code. For that reason the default number of iterations is one thousand.
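
For instance (illustrative calls only; exact timings will vary by machine):

bench_mark(runif(100), times = 10)   # small run; the mean is easily skewed by one slow iteration
bench_mark(runif(100))               # default 1000 iterations gives a more stable mean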

Examples

bench_mark(runif(1000), Sys.sleep(0.001), times=10)
