batchtools (version 0.9.0)

ExperimentRegistry: ExperimentRegistry Constructor

Description

makeExperimentRegistry constructs a special Registry which is suitable for the definition of large-scale computer experiments.

Each experiment consists of a Problem and an Algorithm. These can be parametrized with addExperiments to define computational jobs.

Usage

makeExperimentRegistry(file.dir = "registry", work.dir = getwd(),
  conf.file = findConfFile(), packages = character(0L),
  namespaces = character(0L), source = character(0L),
  load = character(0L), seed = NULL, make.default = TRUE)

Arguments

file.dir
[character(1)] Path where all files of the registry are saved. Default is the directory “registry” in the current working directory. The provided path will be normalized unless it is given relative to the home directory (i.e., starting with “~”). Note that some templates do not handle relative paths well.

If you pass NA, a temporary directory will be used; this way you can create disposable registries for btlapply or examples. By default, tempdir() is used as the temporary directory. If you want to use another directory, e.g. a directory which is shared between nodes, you can set it in your configuration file via the variable temp.dir.
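
For illustration, a minimal sketch of a disposable registry; the shared directory shown in the comment is a hypothetical path and only needed if tempdir() is not suitable:

# Disposable registry, e.g. for btlapply or examples:
tmp = makeExperimentRegistry(file.dir = NA, make.default = FALSE)
# In the configuration file (see conf.file), temp.dir can point to a shared
# directory instead of tempdir(), e.g. (hypothetical path):
# temp.dir = "/shared/scratch/batchtools"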

work.dir
[character(1)] Working directory of the R process when running jobs. Defaults to the working directory currently set during Registry construction (see getwd). loadRegistry uses the stored work.dir, but you may also explicitly overwrite it, e.g., after switching to another system.

The provided path will be normalized unless it is given relative to the home directory (i.e., starting with “~”). Note that some templates do not handle relative paths well.
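
As a hedged sketch of overwriting the stored work.dir when loading the registry on another system (both paths are hypothetical):

reg = loadRegistry(file.dir = "~/registries/my-experiment",
  work.dir = "/new/project/path")  # hypothetical paths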

conf.file
[character(1)] Path to a configuration file which is sourced while the registry is created. For example, you can set cluster functions or default resources in it. The script is executed inside the environment of the registry after the defaults for all variables have been set, so you can set and overwrite slots, e.g. default.resources = list(walltime = 3600) to set default resources.

The file lookup defaults to a heuristic which first tries to read “batchtools.conf.R” in the current working directory. If not found, it looks for a configuration file “config.R” in the OS-dependent user configuration directory as reported by rappdirs::user_config_dir("batchtools", expand = FALSE) (e.g., on Linux this usually resolves to “~/.config/batchtools/config.R”). If this file is also not found, the heuristic finally tries to read the file “.batchtools.conf.R” in the home directory (“~”). Set to character(0) if you want to disable this heuristic.
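
As a sketch, a configuration file could look like the following; the Slurm template file name is hypothetical and the available resource names depend on your template:

# Possible contents of a batchtools configuration file (sketch):
cluster.functions = makeClusterFunctionsSlurm(template = "slurm.tmpl")  # hypothetical template file
default.resources = list(walltime = 3600)  # default resources as mentioned above
temp.dir = "/shared/scratch"               # hypothetical shared directory used for file.dir = NA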

packages
[character] Packages that will always be loaded on each node. Uses require internally. Default is character(0).
namespaces
[character] Same as packages, but the packages will not be attached. Uses requireNamespace internally. Default is character(0).
source
[character] Files which should be sourced on the slaves prior to executing a job. Calls sys.source using the .GlobalEnv.
load
[character] Files which should be loaded on the slaves prior to executing a job. Calls load using the .GlobalEnv.
seed
[integer(1)] Start seed for jobs. Each job uses (seed + job.id) as its seed. Default is a random number in the range [1, .Machine$integer.max/2].
make.default
[logical(1)] If set to TRUE, the created registry is saved inside the package namespace and acts as default registry. You might want to switch this off if you work with multiple registries simultaneously. Default is TRUE.
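
Putting several of these arguments together, a constructor call might look like the following sketch; the file name "helpers.R" and the seed value are merely illustrative:

reg = makeExperimentRegistry(
  file.dir = "registry",
  packages = "data.table",   # attached on each node via require()
  namespaces = "parallel",   # loaded but not attached via requireNamespace()
  source = "helpers.R",      # hypothetical file sourced on the slaves
  seed = 42,                 # job with job.id i is evaluated with seed 42 + i
  make.default = FALSE       # do not register as the default registry
)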

Value

[ExperimentRegistry].

See Also

Other Experiment: addExperiments, removeExperiments, summarizeExperiments

Examples

tmp = makeExperimentRegistry(file.dir = NA, make.default = FALSE)

# Define one problem and two algorithms, then add experiments with some parameters:
addProblem(reg = tmp, "p1",
  fun = function(job, data, n, mean, sd, ...) rnorm(n, mean = mean, sd = sd))
addAlgorithm(reg = tmp, "a1", fun = function(job, data, instance, ...) mean(instance))
addAlgorithm(reg = tmp, "a2", fun = function(job, data, instance, ...) median(instance))
ids = addExperiments(reg = tmp, list(p1 = CJ(n = c(50, 100), mean = -2:2, sd = 1:4)))

# Overview of the defined experiments:
getProblemIds(reg = tmp)
getAlgorithmIds(reg = tmp)
summarizeExperiments(reg = tmp)
summarizeExperiments(reg = tmp, by = c("problem", "algorithm", "n"))
ids = findExperiments(prob.pars = (n == 50), reg = tmp)
getJobPars(ids, reg = tmp)

# Chunk jobs per algorithm and submit them:
ids = chunkIds(getJobPars(reg = tmp), group.by = "algorithm", reg = tmp)
submitJobs(ids, reg = tmp)
waitForJobs(reg = tmp)

# Reduce the results of algorithm a1
ids.mean = findExperiments(algo.name = "a1", reg = tmp)
reduceResults(ids.mean, fun = function(aggr, res, ...) c(aggr, res), reg = tmp)

# Join info table with all results and calculate mean of results
# grouped by n and algorithm
ids = findDone(reg = tmp)
pars = getJobPars(ids, reg = tmp)
results = reduceResultsDataTable(ids, fun = function(res) list(res = res), reg = tmp)
tab = ljoin(pars, results)
tab[, list(mres = mean(res)), by = c("n", "algorithm")]
