Academic Cluster Partitions

The academic (UB-HPC) compute cluster is divided into several partitions (formerly known as "queues") to which users can submit their jobs.

SLURM Partition Structure:

Partition Name     Time Limit   Default Number of CPUs   Job Submission Limit/user
debug              1 hour       1                        4
general-compute    72 hours     1                        1000
gpu                72 hours     1                        4*
largemem           72 hours     1                        32
skylake            72 hours     1                        1000
cascade**          72 hours     1                        1000
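
A job can be directed to a non-default partition with the --partition directive. The following is a minimal sketch of a batch script targeting the debug partition; the job name and workload are placeholders, and your allocation may also require directives such as --account or --qos that are not shown here:

    #!/bin/bash
    #SBATCH --partition=debug          # request the debug partition instead of the default
    #SBATCH --time=01:00:00            # stay within the 1 hour limit on debug
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --job-name=partition-test  # placeholder job name

    # Placeholder workload: report which node the job landed on.
    srun hostname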

NOTE:

The general-compute partition is the default and does not need to be specified.

*The GPU nodes are in both the gpu and scavenger partitions. The scavenger partition is not subject to the per-user job limit, so it can be used to submit more jobs than the gpu partition allows, but it is preemptable: make sure your application can checkpoint and restart before using scavenger.
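
As a rough sketch, a scavenger job can ask SLURM to requeue it after preemption and to send a warning signal shortly beforehand so the application can write a checkpoint. The signal choice, lead time, application name, and checkpoint mechanism below are illustrative assumptions, not CCR-specific settings:

    #!/bin/bash
    #SBATCH --partition=scavenger    # preemptable; not subject to the per-user job limit
    #SBATCH --gres=gpu:1             # the GPU nodes are also reachable through scavenger
    #SBATCH --time=72:00:00
    #SBATCH --requeue                # let SLURM requeue the job when it is preempted
    #SBATCH --signal=B:USR1@120      # signal the batch shell ~120 seconds before preemption

    # On the warning signal, tell the application to checkpoint (illustrative).
    trap 'echo "preemption imminent, checkpointing"; touch checkpoint.requested' USR1

    # Run in the background and wait, so the shell can handle the trap while the job runs.
    srun ./my_app --checkpoint-flag checkpoint.requested &
    wait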

**The cascade partition currently contains all of the nodes purchased under a recent NIH equipment award, including 24 nodes in the large-memory configuration and 8 GPU nodes with two V100 GPUs per node. CCR is evaluating how well the scheduler handles this mix of resources in a single partition to determine whether further partition consolidation is advisable.
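
To inspect the mix of node types currently in a partition, sinfo can list per-node CPU, memory, and GPU (GRES) details; the format string here is just one reasonable choice:

    sinfo -p cascade -N -o "%N %c %m %G"   # node name, CPUs, memory (MB), generic resources (GPUs)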