SLURM usage summary
From crtc.cs.odu.edu
SLURM Cheat Sheet
https://docs.hpc.odu.edu/#slurm-cheat-sheet
Revision as of 15:19, 18 January 2019
View enqueued jobs
squeue -u <username>
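Beyond listing jobs, the state column of squeue output gives a quick summary of how many jobs are pending versus running. Since no SLURM cluster is assumed here, canned output stands in for the real command; on a cluster, replace the stand-in function body with squeue -u "$USER" -h -o "%T" (-h drops the header, %T prints one job state per line):

```shell
# Count jobs per state. The function is a stand-in for:
#   squeue -u "$USER" -h -o "%T"
squeue_states () {
  printf 'RUNNING\nPENDING\nPENDING\n'   # canned sample output
}

squeue_states | sort | uniq -c           # e.g. "2 PENDING" and "1 RUNNING"
```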
View available nodes
sinfo --state=idle
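If you want just the hostnames of idle nodes rather than sinfo's full table, a small filter over "hostname state" pairs works. Again, canned output stands in for a live cluster; on a real system the printf would be sinfo -h -o "%n %t" (%n is the hostname, %t the compact state):

```shell
# Print only idle node hostnames. The printf is a stand-in for:
#   sinfo -h -o "%n %t"
printf 'coreV2-01 idle\ncoreV2-02 alloc\ncoreV2-03 idle\n' |
  awk '$2 == "idle" {print $1}'
```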
Create an interactive job
salloc [SBATCH ARGUMENTS]
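A typical interactive request combines several of the arguments listed below, e.g. one node, four cores, on the high-memory partition. salloc needs a live SLURM controller, so the command is only echoed in this sketch; drop the echo on a real cluster to get the allocation:

```shell
# Sketch of an interactive allocation request (remove "echo" on a cluster).
echo salloc -N 1 -c 4 --partition himem
```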
SBATCH Arguments
* -c <number of cores> OR --cpus-per-task <number of cores>, e.g. -c 12 to allocate 12 cores
* -N <number of nodes> OR --nodes <number of nodes>, e.g. -N 4 to allocate 4 nodes
* -p <name> OR --partition <name>, e.g. --partition "himem" to use a node from the high-memory group
* -C <name> OR --constraint <name>, e.g. --constraint "coreV2*" to use only nodes whose hostname starts with coreV2
* --exclusive to block other jobs from running on the allocated node(s)
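The same arguments can be given as #SBATCH directives in a batch script instead of on the command line. A minimal sketch, combining the options above (the job name and the hostname workload are placeholders; partition and constraint values are the examples from this page, adjust them for your cluster):

```shell
#!/bin/bash
#SBATCH --job-name=demo         # hypothetical job name
#SBATCH -N 1                    # one node
#SBATCH -c 12                   # 12 cores for the task
#SBATCH --partition himem       # high-memory partition
#SBATCH --constraint "coreV2*"  # only nodes whose hostname starts with coreV2
#SBATCH --exclusive             # no other jobs on the node(s)

hostname                        # replace with the real workload
```

Submit it with sbatch <script name>; SLURM reads the #SBATCH comment lines, so no arguments are needed on the command line.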