SLURM usage summary

View queued jobs

squeue -u <username>
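
For example, to check only your own jobs (a sketch: $USER is the shell variable holding your username, and --start is a standard squeue option that reports estimated start times for pending jobs):

squeue -u $USER            # ST column: R = running, PD = pending
squeue -u $USER --start    # estimated start times for pending jobs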

View available nodes

sinfo --state=idle
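
Two common variations, as a sketch ("himem" is the partition name referenced in the SBATCH arguments below; -N and -l are standard sinfo options for node-oriented, long-format output):

sinfo --state=idle --partition=himem    # idle nodes in the high-memory partition
sinfo --state=idle -N -l                # one line per idle node, with CPU and memory details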

Create an interactive job

salloc [SBATCH ARGUMENTS]
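
A minimal interactive session might look like the following sketch (the node and core counts are placeholders; srun launches commands on the allocated node, and exiting the shell releases the allocation):

salloc -N 1 -c 4    # request one node with 4 cores
srun hostname       # runs on the allocated compute node
exit                # release the allocation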

SBATCH Arguments

  • -c <number of cores> OR --cpus-per-task <number of cores>, e.g. -c 12 to allocate 12 cores
  • -N <number of nodes> OR --nodes <number of nodes>, e.g. -N 4 to allocate 4 nodes
  • -p <name> OR --partition <name>, e.g. --partition "himem" to use a node from the high-memory group
  • -C <name> OR --constraint <name>, e.g. --constraint "coreV2*" to use only nodes whose hostname starts with coreV2
  • --exclusive prevents other jobs from running on the allocated node(s)
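
The same flags can also be placed in a batch script and submitted with sbatch. A minimal sketch, with a placeholder job name, output file, and workload (the himem partition and coreV2* constraint are the ones described above):

#!/bin/bash
#SBATCH --job-name=example          # placeholder job name
#SBATCH --output=example-%j.out     # %j expands to the SLURM job ID
#SBATCH -N 1                        # one node
#SBATCH -c 12                       # 12 cores for the task
#SBATCH --partition=himem           # high-memory node group
#SBATCH --constraint="coreV2*"      # restrict to hostnames starting with coreV2
#SBATCH --exclusive                 # do not share the node with other jobs

srun ./my_program                   # placeholder for the actual workload

Submit the script with sbatch <script name> and monitor it with squeue -u $USER.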

SLURM Cheat Sheet

https://docs.hpc.odu.edu/#slurm-cheat-sheet

HPC ODU Documentation

https://docs.hpc.odu.edu/

Turing cluster

TODO: add GPU information
hostname          | nodes | memory (GB) | cache (MB) | Model Name                                | Turbo   | CPUs per node | Threads per core | Cores per socket | sockets
coreV1-22-0*      | 28    | 127         | 20         | Intel(R) Xeon(R) CPU E5-2660 @ 2.20GHz    | 3.00GHz | 16            | 1                | 8                | 2
coreV2-22-0*      | 36    | 126         | 25         | Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz | 3.00GHz | 20            | 1                | 10               | 2
coreV2-25-0*      | 76    | 126         | 25         | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz | 3.30GHz | 20            | 1                | 10               | 2
coreV2-23-himem-* | 4     | 757         | 16         | Intel(R) Xeon(R) CPU E5-4610 v2 @ 2.30GHz | 2.70GHz | 32            | 1                | 8                | 4
coreV3-23-0*      | 50    | 125         | 40         | Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz | 3.20GHz | 32            | 1                | 16               | 2
coreV4-21-0*      | 30    | 125         | 40         | Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz | 3.00GHz | 32            | 1                | 16               | 2
coreV4-21-himem-* | 3     | 504         | 40         | Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz | 3.00GHz | 32            | 1                | 16               | 2
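
For example, an interactive session on one of the high-memory Turing nodes above can combine the partition and constraint flags from the SBATCH argument list (a sketch reusing the himem and coreV2* names given earlier):

salloc -p himem -C "coreV2*" -N 1 -c 32    # one coreV2-23-himem node, all 32 cores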

Wahab cluster

hostname | nodes | memory (GB) | cache (MB) | Model Name                       | Turbo   | CPUs per node | Threads per core | Cores per socket | sockets
any      | 158   | 384         | -          | Intel® Xeon® Gold 6148 @ 2.40GHz | 3.70GHz | 40            | 1                | 20               | 2
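
Since every Wahab node exposes 40 CPUs, a full node can be reserved using only the flags listed above (a sketch):

salloc -N 1 -c 40 --exclusive    # one full 40-core node, not shared with other jobs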