Academic Cluster Partitions

The academic (UB-HPC) compute cluster is divided into several partitions (formerly known as "queues") to which users can submit their jobs.
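For example, a job can be directed to a specific partition with the --partition directive in a batch script submitted via sbatch. A minimal sketch (the time request and application command are illustrative, not a recommended configuration):

    #!/bin/bash
    #SBATCH --partition=debug      # request the debug partition (see table below)
    #SBATCH --time=00:30:00        # wall time, within the partition's time limit
    #SBATCH --nodes=1
    #SBATCH --ntasks=1

    # my_program is a placeholder; replace with your actual application:
    srun ./my_program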

SLURM Partition Structure:

Partition Name     Time Limit   Default Number of CPUs   Job Submission Limit per User
debug              1 hour       1                        4
general-compute    72 hours     1                        1000
gpu                72 hours     1                        4*
largemem           72 hours     1                        32
skylake            72 hours     1                        1000
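The current partition settings can also be queried directly from SLURM with the standard sinfo and scontrol commands; a quick sketch:

    # List each partition with its time limit and node count:
    sinfo -o "%P %l %D"

    # Show the full configuration of a single partition:
    scontrol show partition general-compute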

NOTE:

The general-compute partition is the default and does not need to be specified.

*The GPU nodes are in both the gpu and scavenger partitions. The scavenger partition is not subject to the job submission limit, so it can be used to submit more jobs than the gpu partition allows, but it is preemptible: make sure your application can checkpoint and restart before using scavenger (see the sketch below).
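A minimal sketch of a scavenger submission, assuming no additional QOS or account directive is required by the site configuration (check the local documentation). The --requeue option is a standard sbatch flag that puts a preempted job back in the queue, but the application itself must still save and restore its own state:

    #!/bin/bash
    #SBATCH --partition=scavenger  # preemptible partition, not subject to the job limit
    #SBATCH --requeue              # automatically re-queue this job if it is preempted
    #SBATCH --time=72:00:00

    # my_checkpointing_app and its flag are hypothetical; the application must
    # write periodic checkpoints and resume from the latest one on restart:
    srun ./my_checkpointing_app --resume-from-checkpoint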