Academic Compute Cluster (UB-HPC)

CCR General Compute Cluster.

CCR's general compute cluster for academic users is made up of over 16,000 CPU cores in various configurations.  Many of these servers are kept cool by individual air conditioners and sealed doors.

IMPORTANT: Users submitting MPI jobs need to add an interconnect constraint

The UB-HPC cluster has some compute nodes that do not have InfiniBand or OmniPath.  Add "--constraint=IB" or "--constraint=OPA", as appropriate, to your SLURM scripts.  For example:

#SBATCH --constraint=IB

See the Requesting Specific Node Resources table below.
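
For reference, a minimal MPI batch script using the IB constraint might look like the following sketch.  The module name, time limit, and program name are placeholders rather than cluster-specific values; run "module avail" on the cluster to see which MPI implementations are actually installed.

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=12
#SBATCH --constraint=IB
#SBATCH --time=01:00:00
#SBATCH --job-name=mpi_example

# Load an MPI implementation (placeholder module name)
module load intel-mpi

# srun starts one MPI rank per task requested above (2 nodes x 12 tasks = 24 ranks)
srun ./my_mpi_program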

Server (Node) Types:

Type of Node | Qty | Cores/Node | Clock Rate | RAM | Network* | SLURM Tags | Local /scratch | CPU/GPU Details
Compute | 96 | 40 | 2.10GHz | 192GB | InfiniBand (M) | IB, CPU-Gold-6230, INTEL, NIH | 835GB | Intel Xeon Gold 6230 (2/node)
Compute | 86 | 32 | 2.10GHz | 192GB | OmniPath (OPA) | OPA, CPU-Gold-6130, INTEL, MRI | 827GB | Intel Xeon Gold 6130 (2/node)
Compute | 34 | 16 | 2.20GHz | 128GB | InfiniBand (M) | IB, CPU-E5-2660 | 773GB | Intel Xeon E5-2660 (2/node)
Compute | 372 | 12 | 2.40GHz | 48GB | InfiniBand (QL) | IB, CPU-E5645 | 884GB | Intel Xeon E5645 (2/node)
Compute (Dell) | 128 | 8 | 2.13GHz | 24GB | InfiniBand (M) | IB, CPU-L5630 | 268GB | Intel Xeon L5630 (2/node)
Compute (IBM) | 128 | 8 | 2.27GHz | 24GB | InfiniBand (M) | IB, CPU-L5520 | 268GB | Intel Xeon L5520 (2/node)
High Memory Compute (Intel CPUs) | 24 | 40 | 2.10GHz | 768GB | InfiniBand (M) | IB, CPU-Gold-6230, INTEL, NIH | 3.5TB | Intel Xeon Gold 6230 (2/node)
High Memory Compute (Intel CPUs) | 16 | 32 | 2.10GHz | 768GB | OmniPath (OPA) | OPA, CPU-Gold-6130, INTEL, MRI | 3.5TB | Intel Xeon Gold 6130 (2/node)
High Memory Compute (Intel CPUs) | 8 | 32 | 2.13GHz | 256GB | InfiniBand (QL) | IB, CPU-E7-4830, INTEL | 3.1TB | Intel Xeon E7-4830 (4/node)
High Memory Compute (AMD CPUs) | 8 | 32 | 2.20GHz | 256GB | InfiniBand (QL) | IB, CPU-6132HE, AMD | 3.1TB | AMD Opteron 6132HE (4/node)
High Memory Compute (Intel CPUs) | 2 | 32 | 2.13GHz | 512GB | InfiniBand (QL) | IB, CPU-E7-4830, INTEL | 3.1TB | Intel Xeon E7-4830 (4/node)
GPU Compute | 8 | 40 | 2.10GHz | 192GB | InfiniBand (M) | IB, CPU-Gold-6230, NIH | 845GB | Intel Xeon Gold 6230 (2/node), NVIDIA Tesla V100 24GB (2/node)
GPU Compute | 16 | 32 | 2.10GHz | 192GB | OmniPath (OPA) | OPA, CPU-Gold-6130, MRI | 827GB | Intel Xeon Gold 6130 (2/node), NVIDIA Tesla V100 (2/node)

* HPC NETWORKS: InfiniBand (M) = Mellanox, InfiniBand (QL) = QLogic, Intel OmniPath (OPA).  All nodes are also on the Ethernet service network.
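
To verify which network and CPU tags a given node actually advertises, you can query SLURM directly.  A quick sketch (the node name shown is a placeholder; substitute a real node from the sinfo output):

# List every node with its core count, memory (MB), and SLURM feature tags
sinfo -N -o "%N %c %m %f"

# Inspect a single node in detail; the "Features" field lists its SLURM tags
scontrol show node cpn-example-01 | grep -i features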

Requesting Specific Node Resources:

Node Resources | Sample SLURM Directives

Multi-node with InfiniBand:
Any InfiniBand nodes | --nodes=2 --ntasks-per-node=12 --constraint=IB
A specific InfiniBand node type | --nodes=2 --ntasks-per-node=12 --constraint="IB&CPU-E5645"
(Quote a constraint expression containing "&" so the shell does not interpret the ampersand.)

40 Core Nodes:
Any 40 core nodes | --nodes=1 --ntasks-per-node=40
40 core largemem nodes | --nodes=1 --ntasks-per-node=40 --mem=754000

32 Core Nodes:
512GB nodes | --partition=largemem --qos=largemem --nodes=1 --ntasks-per-node=32 --mem=512000
256GB nodes (Intel processors) | --partition=largemem --qos=largemem --nodes=1 --ntasks-per-node=32 --mem=256000 --constraint=CPU-E7-4830
256GB nodes (AMD processors) | --partition=largemem --qos=largemem --nodes=1 --ntasks-per-node=32 --mem=256000 --constraint=CPU-6132HE
Any 32 core nodes | --nodes=1 --ntasks-per-node=32  (add a CPU constraint if you want to be sure you do not land on a 40 core node)

16 Core Nodes:
Nodes with 16 cores | --nodes=1 --ntasks-per-node=16 --constraint=CPU-E5-2660
Nodes with at least 16 cores | --nodes=1 --ntasks-per-node=16

12 Core Nodes:
Nodes with 12 cores | --nodes=1 --ntasks-per-node=12 --constraint=CPU-E5645
Nodes with at least 12 cores | --nodes=1 --ntasks-per-node=12

8 Core Nodes:
Nodes with 8 cores (IBM) | --nodes=1 --ntasks-per-node=8 --constraint=CPU-L5520
Nodes with 8 cores (Dell) | --nodes=1 --ntasks-per-node=8 --constraint=CPU-L5630
Nodes with at least 8 cores | --nodes=1 --ntasks-per-node=8

Memory:
Nodes with at least X GB | --nodes=1 --mem=X000

GPUs:
Nodes with GPUs (32 core) | --partition=gpu --qos=gpu --nodes=1 --ntasks-per-node=32 --gres=gpu:1 (or gpu:2)
Nodes with GPUs (40 core) | --partition=cascade --qos=cascade --nodes=1 --gres=gpu:1 (or gpu:2)
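
Putting the GPU directives together, a minimal batch script for a single V100 on a 32 core GPU node might look like the sketch below.  The time limit, module name, and program name are placeholders.  The same pattern applies to large-memory jobs: swap in the largemem partition/QOS and a --mem request from the table above.

#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --qos=gpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00

# Load a CUDA toolkit (placeholder module name; check "module avail")
module load cuda

# SLURM sets CUDA_VISIBLE_DEVICES to the GPU(s) allocated to the job
echo "Allocated GPU(s): $CUDA_VISIBLE_DEVICES"
./my_gpu_program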