Academic Cluster Partitions

The academic (UB-HPC) compute cluster is divided into several partitions (formerly known as "queues") to which users can submit their jobs.

SLURM Partition Structure:

Partition Name     Time Limit   Default Number of CPUs   Job Submission Limit (per user)
general-compute    72 hours     1                        1000
debug              1 hour       1                        4
gpu                72 hours     1                        32
largemem           72 hours     1                        32
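
The limits above apply to each job at submission time. As a rough illustration (the job name, CPU count, and program name below are placeholders, not site requirements), a batch script that stays within the general-compute limits might begin like this:

    #!/bin/bash
    #SBATCH --partition=general-compute   # default partition; shown here only for clarity
    #SBATCH --time=72:00:00               # must not exceed the 72-hour partition limit
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=8           # example CPU count; the partition default is 1
    #SBATCH --job-name=my_job             # placeholder job name
    #SBATCH --output=my_job.%j.out        # %j expands to the SLURM job ID

    # placeholder command for the actual application
    srun ./my_program

Submit the script with "sbatch", e.g. "sbatch myscript.slurm". Depending on the cluster configuration, jobs that request more wall time than the partition limit are either rejected at submission or left pending.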

NOTE:

The general-compute partition is the default and does not need to be specified.

The GPU nodes are in both the general-compute and gpu partitions. If you want to use the GPUs, please submit your job to the gpu partition ("--partition=gpu").
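
For example, a GPU job might use directives like the following (a minimal sketch; the GPU count and program name are placeholders, and the exact "--gres" specification accepted on the cluster may differ):

    #!/bin/bash
    #SBATCH --partition=gpu               # GPU jobs should go to the gpu partition
    #SBATCH --gres=gpu:1                  # request one GPU (exact gres syntax may vary by site)
    #SBATCH --time=24:00:00               # within the 72-hour gpu partition limit
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1

    # placeholder command for the GPU application
    srun ./my_gpu_program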

More Information about Queues and Requesting Resources:

  • The debug queue must be requested explicitly ("--partition=debug"); jobs will not be routed into it automatically (see the example submission after this list). The debug nodes are:
    • 2 x 8-core IBM nodes (2.2 GHz) - CPU-L5520
    • 2 x 8-core DELL nodes (2.13 GHz) - CPU-5630
    • 2 x 12-core nodes - CPU-E5645
    • 1 x 16-core node with 2 GPUs - CPU-E5-2660
  • Need more information? You can use the sview GUI: select a queue and click the details button to see all of the settings for that queue.
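
For example, a short test run on the debug partition might use directives like these (a minimal sketch; the CPU count and program name are placeholders):

    #!/bin/bash
    #SBATCH --partition=debug             # the debug partition must be requested explicitly
    #SBATCH --time=00:30:00               # within the 1-hour debug limit
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=8

    # placeholder command for the test run
    srun ./my_test_program

From the command line, "scontrol show partition debug" or "sinfo -p debug" display the same partition settings that sview shows.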