
Commit

Now availableCores() falls back to SLURM_CPUS_ON_NODE [#22, #412]
HenrikBengtsson committed Sep 18, 2020
1 parent 024952f commit d445977
Showing 3 changed files with 46 additions and 1 deletion.
7 changes: 6 additions & 1 deletion NEWS
@@ -1,7 +1,7 @@
Package: future
===============

-Version: 1.18.0-9000 [2020-09-02]
+Version: 1.18.0-9000 [2020-09-18]

SIGNIFICANT CHANGES:

@@ -32,6 +32,11 @@ NEW FEATURES:
the setting on the current system, then an informative FutureWarning is
produced by such futures.

* Now availableCores() better supports Slurm. Specifically, if environment
variable 'SLURM_CPUS_PER_TASK' is not set, which happens when option
'--cpus-per-task=n' is not specified, then it falls back to using
'SLURM_CPUS_ON_NODE', which is set when using '--ntasks=n'.

PERFORMANCE:

* Now plan(multisession), plan(cluster, workers = <number>), and
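The fallback order described in the NEWS entry above can be sketched in plain R. This is a hypothetical stand-alone version for illustration only; `slurm_cores()` is not part of the package, which goes through an internal `getenv()` helper instead:

```r
## Hypothetical sketch of the fallback order described above;
## not the package's actual implementation.
slurm_cores <- function() {
  ## Set by Slurm when --cpus-per-task=n is specified
  n <- Sys.getenv("SLURM_CPUS_PER_TASK", NA_character_)
  if (is.na(n)) {
    ## Fallback: set by Slurm when, e.g., --ntasks=n is used
    n <- Sys.getenv("SLURM_CPUS_ON_NODE", NA_character_)
  }
  if (is.na(n)) NA_integer_ else as.integer(n)
}
```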
31 changes: 31 additions & 0 deletions R/availableCores.R
@@ -108,7 +108,38 @@ availableCores <- function(constraints = NULL, methods = getOption("future.avail
method <- methods[kk]
if (method == "Slurm") {
## Number of cores assigned by Slurm

## The assumption is that the following works regardless of
## number of nodes requested /HB 2020-09-18
## Example: --cpus-per-task={n}
n <- getenv("SLURM_CPUS_PER_TASK")
if (is.na(n)) {
## Example: --nodes=1 --ntasks={n} (short: -n {n})
## IMPORTANT: 'SLURM_CPUS_ON_NODE' appears to be rounded up if
## --cpus-per-task is specified, e.g. --nodes=2 --cpus-per-task=3 gives
## SLURM_CPUS_ON_NODE=4 and SLURM_CPUS_PER_TASK=3. /HB 2020-09-18
n <- getenv("SLURM_CPUS_ON_NODE")
}

## TODO?: Can we validate above assumptions/results? /HB 2020-09-18
if (FALSE && !is.na(n)) {
## Is any of the following useful?

## Example: --nodes={nnodes} (defaults to 1, short: -N {nnodes})
## From 'man sbatch':
## SLURM_JOB_NUM_NODES (and SLURM_NNODES for backwards compatibility)
## Total number of nodes in the job's resource allocation.
nnodes <- getenv("SLURM_JOB_NUM_NODES")
if (is.na(nnodes)) nnodes <- getenv("SLURM_NNODES")
if (is.na(nnodes)) nnodes <- 1L ## Can this happen? /HB 2020-09-18

## Example: --ntasks={ntasks} (no default, short: -n {ntasks})
## From 'man sbatch':
## SLURM_NTASKS (and SLURM_NPROCS for backwards compatibility)
## Same as -n, --ntasks
ntasks <- getenv("SLURM_NTASKS")
if (is.na(ntasks)) ntasks <- getenv("SLURM_NPROCS")
}
} else if (method == "PBS") {
## Number of cores assigned by TORQUE/PBS
n <- getenv("PBS_NUM_PPN")
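The disabled validation branch above looks up each quantity via a primary variable with a legacy fallback ('SLURM_JOB_NUM_NODES' then 'SLURM_NNODES'; 'SLURM_NTASKS' then 'SLURM_NPROCS'). That pattern can be factored out as follows; `first_set()` is a hypothetical helper, not package code:

```r
## Hypothetical helper: return the first of the named environment
## variables that is set, as an integer, or NA if none is set.
first_set <- function(...) {
  for (name in c(...)) {
    value <- Sys.getenv(name, NA_character_)
    if (!is.na(value)) return(as.integer(value))
  }
  NA_integer_
}

## The primary/legacy pairs quoted from 'man sbatch' above:
nnodes <- first_set("SLURM_JOB_NUM_NODES", "SLURM_NNODES")
ntasks <- first_set("SLURM_NTASKS", "SLURM_NPROCS")
```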
9 changes: 9 additions & 0 deletions R/availableWorkers.R
@@ -125,6 +125,15 @@ availableWorkers <- function(methods = getOption("future.availableWorkers.method
if (!identical(nslots, length(w))) {
warning(sprintf("Identified %d workers from the %s file (%s), which does not match environment variable %s = %d", length(w), sQuote("PE_HOSTFILE"), sQuote(pathname), sQuote("NSLOTS"), nslots))
}
} else if (method == "Slurm") {
## From 'man sbatch':
## SLURM_JOB_NODELIST (and SLURM_NODELIST for backwards compatibility)
## List of nodes allocated to the job.
data <- getenv("SLURM_JOB_NODELIST")
if (is.na(data)) data <- getenv("SLURM_NODELIST")

## TODO: Parse 'data' into hostnames /HB 2020-09-18
## ...
} else if (method == "custom") {
fcn <- getOption("future.availableWorkers.custom", NULL)
if (!is.function(fcn)) next
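The parsing left as a TODO in the diff above is nontrivial because 'SLURM_JOB_NODELIST' may use a compact bracket syntax (e.g. "n[1-3],m1"). A hypothetical sketch, not the package's implementation, that handles only the plain comma-separated case itself and defers compact expressions to Slurm's own 'scontrol show hostnames' expander:

```r
## Hypothetical sketch of the missing parser.
parse_nodelist <- function(data) {
  if (grepl("[", data, fixed = TRUE)) {
    ## Compact form, e.g. "n[1-3],m1": delegate to Slurm's expander,
    ## which prints one hostname per line.
    system2("scontrol", c("show", "hostnames", shQuote(data)), stdout = TRUE)
  } else {
    ## Plain comma-separated hostnames
    strsplit(data, ",", fixed = TRUE)[[1]]
  }
}
```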
