
future.batchtools (version 0.20.0)

batchtools_slurm: A batchtools Slurm backend that resolves futures in parallel via the Slurm job scheduler

Description

A batchtools Slurm backend that resolves futures in parallel via the Slurm job scheduler.

Usage

batchtools_slurm(
  ...,
  template = "slurm",
  scheduler.latency = 1,
  fs.latency = 65,
  resources = list(),
  delete = getOption("future.batchtools.delete", "on-success"),
  workers = getOption("future.batchtools.workers", default = 100L)
)

Arguments

template

(optional) Name of the job-script template to be searched for by batchtools::findTemplateFile(). If not found, it defaults to the templates/slurm.tmpl file that is part of this package (see below).
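
For example, the default template can be inspected with file.show(), and a custom template can be selected by name. A minimal sketch; the name "slurm-gpu" below is hypothetical, and batchtools::findTemplateFile() would search for a matching file such as batchtools.slurm-gpu.tmpl in the working directory:

file.show(system.file("templates", "slurm.tmpl", package = "future.batchtools"))
future::plan(future.batchtools::batchtools_slurm, template = "slurm-gpu")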

scheduler.latency

[numeric(1)]
Time to sleep after important interactions with the scheduler to ensure a sane state. Currently only triggered after calling submitJobs.

fs.latency

[numeric(1)]
Expected maximum latency of the file system, in seconds. Set it to a positive number for network file systems like NFS; this enables more robust (but also more expensive) mechanisms for accessing files and directories. It is usually safe to set it to 0 to disable the heuristic, e.g. if you are working on a local file system.
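
For example, when the batchtools registry lives on a local file system, the heuristic can be disabled (a minimal sketch, assuming no network file system is involved):

future::plan(future.batchtools::batchtools_slurm, fs.latency = 0)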

resources

(optional) A named list passed to the batchtools job-script template as the variable resources. This is based on how batchtools::submitJobs() works, with the exception of a few specially reserved names defined by the future.batchtools package (see the sketch after this list):

  • resources[["asis"]] is a character vector that are passed as-is to the job script and are injected as job resource declarations.

  • resources[["modules"]] is character vector of Linux environment modules to be loaded.

  • resources[["startup"]] and resources[["shutdown"]] are character vectors of shell code to be injected to the job script as-is.

  • resources[["details"]], if TRUE, results in the job script outputting job details and job summaries at the beginning and at the end.

  • All remaining named elements of resources are injected as named resource specifications for the scheduler.
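
To illustrate, the following sketch mimics how the job-script template (shown under Details) turns the remaining named elements into #SBATCH declarations; the resource names below are examples, not required names:

resources <- list(time = "00:10:00", mem = "400M")
opts <- unlist(resources, use.names = TRUE)
writeLines(sprintf("#SBATCH --%s=%s", names(opts), opts))
## #SBATCH --time=00:10:00
## #SBATCH --mem=400M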

delete

Controls if and when the batchtools job registry folder is deleted. If "on-success" (default), it is deleted if the future was resolved successfully and the expression did not produce an error. If "never", then it is never deleted. If "always", then it is always deleted.
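
For example, to keep the registry folder for post-mortem debugging of failed jobs (a minimal sketch; requires the future package):

future::plan(future.batchtools::batchtools_slurm, delete = "never")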

workers

The maximum number of workers the batchtools backend may use at any time, which for HPC schedulers corresponds to the maximum number of queued jobs. The default is getOption("future.batchtools.workers", 100).
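
For example, to allow at most 200 queued Slurm jobs at any time (a minimal sketch):

future::plan(future.batchtools::batchtools_slurm, workers = 200L)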

...

Not used.

Details

Batchtools Slurm futures use batchtools cluster functions created by batchtools::makeClusterFunctionsSlurm(), which interact with the Slurm job scheduler. This requires that the Slurm commands sbatch, squeue, and scancel are available on the current machine.
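
A quick way to verify this requirement is to look the commands up on the PATH; an empty string indicates a missing command:

Sys.which(c("sbatch", "squeue", "scancel"))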

The default template script templates/slurm.tmpl can be found in:

system.file("templates", "slurm.tmpl", package = "future.batchtools")

and comprises:

#!/bin/bash
######################################################################
# A batchtools launch script template for Slurm
#
# Author: Henrik Bengtsson 
######################################################################

## Job name
#SBATCH --job-name=<%= job.name %>

## Direct streams to logfile
#SBATCH --output=<%= log.file %>

## Resources needed:
<%
## Shell "details" code to evaluate
details <- isTRUE(resources[["details"]])
resources[["details"]] <- NULL

## Shell "startup" code to evaluate
startup <- resources[["startup"]]
resources[["startup"]] <- NULL

## Shell "shutdown" code to evaluate
shutdown <- resources[["shutdown"]]
resources[["shutdown"]] <- NULL

## Environment modules specifications
modules <- resources[["modules"]]
resources[["modules"]] <- NULL

## As-is resource specifications
job_declarations <- resources[["asis"]]
resources[["asis"]] <- NULL

## Remaining resources are assumed to be of type '--<key>=<value>'
opts <- unlist(resources, use.names = TRUE)
opts <- sprintf("--%s=%s", names(opts), opts)
job_declarations <- sprintf("#SBATCH %s", c(job_declarations, opts))
writeLines(job_declarations)
%>

## Bash settings
set -e          # exit on error
set -u          # error on unset variables
set -o pipefail # fail a pipeline if any command fails
trap 'echo "ERROR: future.batchtools job script failed on line $LINENO" >&2; exit 1' ERR

<% if (length(job_declarations) > 0) {
  writeLines(c(
    "echo 'Job submission declarations:'",
    sprintf("echo '%s'", job_declarations),
    "echo"
  ))
} %>

<% if (details) { %>
if command -v scontrol > /dev/null; then
  echo "Job information:"
  scontrol show job "${SLURM_JOB_ID}"
  echo
fi
<% } %>

<% if (length(startup) > 0) { writeLines(startup) } %>

<% if (length(modules) > 0) {
  writeLines(c(
    "echo 'Load environment modules:'",
    sprintf("echo '- modules: %s'", paste(modules, collapse = ", ")),
    sprintf("module load %s", paste(modules, collapse = " ")),
    "module list"
  ))
} %>

echo "Session information:"
echo "- timestamp: $(date +"%Y-%m-%d %H:%M:%S%z")"
echo "- hostname: $(hostname)"
echo "- Rscript path: $(which Rscript)"
echo "- Rscript version: $(Rscript --version)"
echo "- Rscript library paths: $(Rscript -e "cat(shQuote(.libPaths()), sep = ' ')")"
echo

## Launch R and evaluate the batchtools R job
echo "Rscript -e 'batchtools::doJobCollection()' ..."
echo "- job name: '<%= job.name %>'"
echo "- job log file: '<%= log.file %>'"
echo "- job uri: '<%= uri %>'"
Rscript -e 'batchtools::doJobCollection("<%= uri %>")'
res=$?
echo " - exit code: ${res}"
echo "Rscript -e 'batchtools::doJobCollection()' ... done"
echo

<% if (details) { %>
if command -v sstat > /dev/null; then
  echo "Job summary:"
  sstat --format="JobID,AveCPU,MaxRSS,MaxPages,MaxDiskRead,MaxDiskWrite" --allsteps --jobs="${SLURM_JOB_ID}"
fi
<% } %>

<% if (length(shutdown) > 0) { writeLines(shutdown) } %>

echo "End time: $(date +"%Y-%m-%d %H:%M:%S%z")"

## Relay the exit code from Rscript
exit "${res}"

This template and the built-in batchtools::makeClusterFunctionsSlurm() have been verified to work on a few different Slurm HPC clusters:

  1. Slurm 21.08.4, Rocky 8 Linux, NFS global filesystem (August 2025)

  2. Slurm 22.05.11, Rocky 8 Linux, NFS global filesystem (August 2025)

  3. Slurm 23.02.6, Ubuntu 24.04 LTS, NFS global filesystem (August 2025)

Examples

if (FALSE) { # interactive()
library(future)

# Limit runtime to 10 minutes and memory to 400 MiB per future, and
# request four tasks on a single node. Submit to the 'freecycle'
# partition. Load environment modules 'r' and 'jags'. Report on job
# details at startup and at the end of the job.
plan(future.batchtools::batchtools_slurm, resources = list(
  time = "00:10:00", mem = "400M",
  asis = c("--nodes=1", "--ntasks=4", "--partition=freecycle"),
  modules = c("r", "jags"),
  details = TRUE
))

f <- future({
  data.frame(
    hostname = Sys.info()[["nodename"]],
          os = Sys.info()[["sysname"]],
       cores = unname(parallelly::availableCores()),
     modules = Sys.getenv("LOADEDMODULES")
  )
})
info <- value(f)
print(info)
}
