SLURM usage summary
 

= View enqueued jobs =

 squeue -u <username>
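For example, to list only your own jobs without typing a username, the shell variable $USER can stand in (assuming a standard Linux login shell):

 squeue -u $USER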

= View available nodes =

 sinfo --state=idle
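sinfo also accepts -p/--partition to narrow the listing; for example, to see only the idle nodes in the "himem" partition mentioned in the argument list below:

 sinfo --state=idle --partition himem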

= Create an interactive job =

 salloc [SBATCH ARGUMENTS]

The common arguments are listed in the next section, with a combined example after the list.

= SBATCH Arguments =

* -c <number of cores> OR --cpus-per-task <number of cores>, e.g. -c 12 to allocate 12 cores
* -N <number of nodes> OR --nodes <number of nodes>, e.g. -N 4 to allocate 4 nodes
* -p <name> OR --partition <name>, e.g. --partition "himem" to use a node from the high-memory group
* -C <name> OR --constraint <name>, e.g. --constraint "coreV2*" to use only nodes whose hostnames start with coreV2
* --exclusive prevents other jobs from running on the allocated node(s)
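Putting these arguments together, here is a sketch of the same request made two ways: interactively with salloc, and as a batch script submitted with sbatch. The resource numbers, job name, and program are placeholders; the "himem" partition is the one named above.

 salloc -N 2 -c 16 --partition himem --exclusive

The equivalent batch script (save it as, e.g., myjob.sh and submit it with sbatch myjob.sh):

 #!/bin/bash
 #SBATCH --job-name=myjob        # placeholder job name, shown by squeue
 #SBATCH --nodes=2               # same as -N 2
 #SBATCH --cpus-per-task=16      # same as -c 16
 #SBATCH --partition=himem       # high-memory node group
 #SBATCH --exclusive             # keep other jobs off the allocated node(s)
 #SBATCH --output=myjob-%j.out   # %j expands to the SLURM job ID
 srun ./my_program               # placeholder executable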

= SLURM Cheat Sheet =

https://docs.hpc.odu.edu/#slurm-cheat-sheet

= Turing hardware =

{| class="wikitable"
! hostname !! nodes !! memory (GB) !! cache (MB) !! Model Name !! Turbo !! CPUs per node !! Threads per core !! Cores per socket !! sockets
|-
| coreV1-22-0* || 28 || 127 || 20 || Intel(R) Xeon(R) CPU E5-2660 @ 2.20GHz || 3.00GHz || 16 || 1 || 8 || 2
|-
| coreV2-22-0* || 36 || 126 || 25 || Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz || 3.00GHz || 20 || 1 || 10 || 2
|-
| coreV2-25-0* || 76 || 126 || 25 || Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz || 3.30GHz || 20 || 1 || 10 || 2
|-
| coreV2-23-himem-* || 4 || 757 || 16 || Intel(R) Xeon(R) CPU E5-4610 v2 @ 2.30GHz || 2.70GHz || 32 || 1 || 8 || 4
|-
| coreV3-23-0* || 50 || 125 || 40 || Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz || 3.20GHz || 32 || 1 || 16 || 2
|-
| coreV4-21-0* || 30 || 125 || 40 || Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz || 3.00GHz || 32 || 1 || 16 || 2
|-
| coreV4-21-himem-* || 3 || 504 || 40 || Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz || 3.00GHz || 32 || 1 || 16 || 2
|}
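As a sketch of steering a job onto specific hardware from this table, the request below asks for one high-memory node using the partition and constraint syntax from the SBATCH arguments section. Whether hostname-style constraints such as "coreV2*" are defined as node features depends on the cluster's configuration.

 salloc -N 1 --partition himem -C "coreV2*" --exclusive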