The following settings ("methods") for inferring the number of cores
are supported:
"system" -
Query detectCores(logical = logical).
"cgroups.cpuset" -
On Unix, query control group (cgroup) value cpuset.cpus.
"cgroups.cpuquota" -
On Unix, query control group (cgroup) value
cpu.cfs_quota_us / cpu.cfs_period_us, i.e. the CPU quota
expressed as a number of CPUs.
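The quota arithmetic can be sketched as follows; the helper name and the
round-up rule are this sketch's assumptions, not necessarily what
availableCores() does exactly:

```python
import math

def cpus_from_cfs_quota(quota_us, period_us):
    """Number of CPUs implied by a CFS quota, as read from the
    cgroups v1 values cpu.cfs_quota_us and cpu.cfs_period_us.
    A non-positive quota (e.g. -1) means 'no limit imposed'."""
    if quota_us <= 0 or period_us <= 0:
        return None  # no quota imposed
    return math.ceil(quota_us / period_us)

# A container limited to 150 ms of CPU time per 100 ms period
# amounts to 1.5 CPUs, rounded up here to 2.
print(cpus_from_cfs_quota(150_000, 100_000))
```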
"nproc" -
On Unix, query system command nproc.
"mc.cores" -
If available, returns the value of option
mc.cores.
Note that mc.cores is defined as the number of R
processes that can be used in addition to the
main R process. This means that with mc.cores = 0 all
calculations should be done in the main R process, i.e. we have
exactly one core available for our calculations.
The mc.cores option defaults to environment variable
MC_CORES (and is set accordingly when the parallel
package is loaded). The mc.cores option is used by,
for instance, mclapply() of the parallel
package.
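One way to read the semantics above is sketched below: the setting falls
back to the MC_CORES environment variable, and mc.cores = 0 still leaves
one core available (the main process itself). The helper name and the
max(1, ...) mapping are this sketch's assumptions, not the package's
exact implementation.

```python
import os

def available_cores_mc_cores(options=None):
    """Sketch: resolve the mc.cores setting, falling back to the
    MC_CORES environment variable when the option is unset.
    Interpreting mc.cores = 0 as 'exactly one core' via
    max(1, ...) is this sketch's assumption."""
    options = options or {}
    value = options.get("mc.cores", os.environ.get("MC_CORES"))
    if value is None:
        return None  # setting not available
    return max(1, int(value))

print(available_cores_mc_cores({"mc.cores": 0}))
```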
"BiocParallel" -
Query environment variables BIOCPARALLEL_WORKER_NUMBER (integer),
which is defined and used by BiocParallel (>= 1.27.2), and
BBS_HOME (logical) used by the Bioconductor Build System. If the
former is set, this is the number of cores considered. If the latter
is set, then a maximum of 4 cores is considered.
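The precedence between the two variables can be sketched as below.
Returning the cap itself when only BBS_HOME is set is a simplification of
"a maximum of 4 cores is considered", and the helper name is
hypothetical.

```python
import os

def available_cores_biocparallel(env=None):
    """Sketch of the precedence described above:
    BIOCPARALLEL_WORKER_NUMBER, if set, gives the core count;
    otherwise, if BBS_HOME is set (Bioconductor Build System),
    at most 4 cores are considered."""
    env = os.environ if env is None else env
    workers = env.get("BIOCPARALLEL_WORKER_NUMBER")
    if workers is not None:
        return int(workers)
    if env.get("BBS_HOME"):
        return 4
    return None  # method not applicable

print(available_cores_biocparallel({"BIOCPARALLEL_WORKER_NUMBER": "3"}))
```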
"LSF" -
Query Platform Load Sharing Facility (LSF) environment variable
LSB_DJOB_NUMPROC.
Jobs with multiple (CPU) slots can be submitted on LSF using
bsub -n 2 -R "span[hosts=1]" < hello.sh.
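Several of the scheduler methods in this list reduce to reading an
integer-valued environment variable. A minimal sketch of that pattern,
using LSF's LSB_DJOB_NUMPROC as the example; the helper name and the
validation rules are assumptions:

```python
import os

def env_cores(name, env=None):
    """Read a core count from an integer-valued environment
    variable such as LSF's LSB_DJOB_NUMPROC, returning None when
    it is unset or not a valid positive integer."""
    env = os.environ if env is None else env
    value = env.get(name)
    if value is None:
        return None
    try:
        n = int(value)
    except ValueError:
        return None
    return n if n >= 1 else None

# After 'bsub -n 2 -R "span[hosts=1]" < hello.sh', LSF sets:
print(env_cores("LSB_DJOB_NUMPROC", {"LSB_DJOB_NUMPROC": "2"}))
```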
"PJM" -
Query Fujitsu Technical Computing Suite (that we choose to shorten
as "PJM") environment variables PJM_VNODE_CORE and
PJM_PROC_BY_NODE.
The former is set when a job is submitted with, for example,
pjsub -L vnode-core=8 hello.sh.
"PBS" -
Query TORQUE/PBS environment variables PBS_NUM_PPN and NCPUS.
Depending on PBS system configuration, these resource
parameters may or may not default to one.
An example of a job submission that sets these is
qsub -l nodes=1:ppn=2, which requests one node with two cores.
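A hedged sketch of the PBS lookup; which variable takes precedence is an
assumption here (the order they are listed above), and the helper name is
hypothetical:

```python
import os

def available_cores_pbs(env=None):
    """Sketch: query the TORQUE/PBS variables named above,
    trying PBS_NUM_PPN before NCPUS (this order is an
    assumption, mirroring the order they are listed in)."""
    env = os.environ if env is None else env
    for name in ("PBS_NUM_PPN", "NCPUS"):
        value = env.get(name)
        if value is not None:
            return int(value)
    return None

# 'qsub -l nodes=1:ppn=2' requests one node with two cores:
print(available_cores_pbs({"PBS_NUM_PPN": "2"}))
```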
"SGE" -
Query Sun Grid Engine/Oracle Grid Engine/Son of Grid Engine (SGE)
environment variable NSLOTS.
An example of a job submission that sets NSLOTS is
qsub -pe smp 2 (or qsub -pe by_node 2), which
requests two cores on a single machine.
"Slurm" -
Query Simple Linux Utility for Resource Management (Slurm)
environment variable SLURM_CPUS_PER_TASK.
This may or may not be set. It can be set when submitting a job,
e.g. sbatch --cpus-per-task=2 hello.sh or by adding
#SBATCH --cpus-per-task=2 to the hello.sh script.
If SLURM_CPUS_PER_TASK is not set, then it will fall back to
use SLURM_CPUS_ON_NODE if the job is a single-node job
(SLURM_JOB_NUM_NODES is 1), e.g. sbatch --ntasks=2 hello.sh.
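The fallback logic described above can be sketched as follows; the helper
name is hypothetical:

```python
import os

def available_cores_slurm(env=None):
    """Sketch of the fallback described above: use
    SLURM_CPUS_PER_TASK when set; otherwise, for a single-node
    job (SLURM_JOB_NUM_NODES is 1), fall back to
    SLURM_CPUS_ON_NODE."""
    env = os.environ if env is None else env
    cpus = env.get("SLURM_CPUS_PER_TASK")
    if cpus is not None:
        return int(cpus)
    if env.get("SLURM_JOB_NUM_NODES") == "1":
        cpus = env.get("SLURM_CPUS_ON_NODE")
        if cpus is not None:
            return int(cpus)
    return None

# 'sbatch --ntasks=2 hello.sh' on one node might set, e.g.:
print(available_cores_slurm({"SLURM_JOB_NUM_NODES": "1",
                             "SLURM_CPUS_ON_NODE": "2"}))
```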
"custom" -
If option
parallelly.availableCores.custom
is set to a function,
then this function is called (without arguments) and its value
is coerced to an integer, which is interpreted as the number
of available cores. If the value is NA, it is ignored.
It is safe for this custom function to call availableCores(); if
it does, the custom function will not be called recursively.
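The coercion and NA handling can be sketched as follows, with None
standing in for R's NA; the helper name is hypothetical and the recursion
guard mentioned above is omitted:

```python
def cores_from_custom(callback):
    """Sketch of the 'custom' method: call the user-supplied
    function with no arguments, coerce its result to an integer,
    and ignore the result (return None) when it is missing."""
    value = callback()
    if value is None:  # stands in for R's NA
        return None
    return int(value)

print(cores_from_custom(lambda: 2.0))
```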