Academic Compute Partitions Hardware Specs

The UB-HPC cluster contains several partitions available only to academic users.  These partitions comprise Linux "nodes" (a.k.a. servers) with differing hardware specifications, manufactured by several different vendors.  The hardware is similar enough that, when networked together, users can spread large problems across many nodes and complete them faster.  This arrangement is known as a "Beowulf cluster."

For information about the nodes in the "Industry" partition of this cluster, please see this page.

Fun Fact!

Beowulf is the earliest surviving epic poem written in the English language. It is a story about a hero with the strength of many men who defeated a fearsome monster called Grendel.  In computing, a Beowulf-class cluster computer is a multicomputer architecture used for parallel computations, i.e., it uses many computers together so that it has the brute force to defeat fearsome number-crunching problems.

Login servers are for interactive use, job submissions, editing, and transferring files.  A CPU time limit of 30 minutes is in effect to prevent users from running software on the login servers.  For more information on login servers, including connection information, please refer to our documentation.
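
For example, connecting to a login server and transferring a file from the command line looks like the following.  The hostname shown is a placeholder; use the address given in our connection documentation, along with your own username:

    # Open an interactive shell on a login server (replace <login-host> with the
    # address listed in the connection documentation)
    ssh username@<login-host>

    # Copy a file from your workstation to your home directory on the cluster
    scp input-data.tar.gz username@<login-host>:~/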

Compute nodes are accessible only through the SLURM scheduler (batch or interactive job submissions, or via OnDemand).  These node configurations are available in the debug and general-compute partitions of the UB-HPC cluster, unless otherwise specified.  This information changes frequently, so please use the snodes command for the most accurate, current conditions, and refer to our documentation for more information.
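
For example, you can check current node availability from a login server with the snodes command and then submit a batch job.  This is only a sketch: the QOS and account values below are placeholders and should match your own allocation:

    # Show current node counts, CPUs, memory, and state
    snodes

A minimal batch script (e.g., example.slurm) for the general-compute partition might look like this:

    #!/bin/bash
    #SBATCH --partition=general-compute   # or "debug" for short test runs
    #SBATCH --qos=general-compute         # placeholder; use the QOS tied to your allocation
    #SBATCH --account=yourgroup           # placeholder SLURM account name
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --time=01:00:00
    ./my_program                          # placeholder application

Submit it with "sbatch example.slurm", or request an interactive session with salloc using the same options.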

Server (Node) Types (ordered oldest to newest):

| Type of Node | # of Nodes | # CPUs | Processor | GPU | RAM | Network | SLURM TAGS | Local /scratch |
|---|---|---|---|---|---|---|---|---|
| "Cascade Lake" Standard Compute Node | 94 | 40 | Intel Xeon Gold 6230 | - | 187GB | Infiniband | CPU-Gold-6230, CASCADE-LAKE-IB | 880GB |
| "Cascade Lake" Large Memory Node | 24 | 40 | Intel Xeon Gold 6230 | - | 754GB | Infiniband | CPU-Gold-6230, CASCADE-LAKE-IB, LM | 3.5TB |
| "Cascade Lake" GPU Node | 8 | 40 | Intel Xeon Gold 6230 | 2x V100 (32GB) | 754GB | Infiniband | CPU-Gold-6230, CASCADE-LAKE-IB, V100 | 880GB |
| "Cascade Lake" GPU Node | 1 | 48 | Intel Gold-6240R | 12x A16 (15GB) | 512GB | Ethernet | Gold-6240R, A16 | 880GB |
| "Ice Lake" Standard Compute Node* | 32 | 56 | Intel Gold-6330 | - | 512GB | Infiniband | CPU-Gold-6330, ICE-LAKE-IB | 880GB |
| "Ice Lake" Large Memory Node* | 12 | 56 | Intel Gold-6330 | - | 1TB | Infiniband | CPU-Gold-6330, ICE-LAKE-IB | 7TB |
| "Ice Lake" GPU Node* | 12 | 56 | Intel Gold-6330 | 2x A100 (40GB) | 512GB | Infiniband | CPU-Gold-6330, ICE-LAKE-IB, A100 | 880GB |
| "Sapphire Rapids" Standard Compute Node* | 75 | 64 | Intel Gold-6448Y | - | 512GB | Infiniband**** | CPU-Gold-6448Y, SAPPHIRE-RAPIDS-IB | 880GB |
| "Sapphire Rapids" Large Memory Node* | 12 | 64 | Intel Gold-6448Y | - | 2TB | Infiniband | CPU-Gold-6448Y, SAPPHIRE-RAPIDS-IB | 7TB |
| "Sapphire Rapids" GPU Node* | 4 | 64 | Intel Gold-6448Y | H100 (80GB) | 512GB | Infiniband | CPU-Gold-6448Y, SAPPHIRE-RAPIDS-IB, H100 | 7TB |
| Grace Hopper Node** | 2 | 72 | ARM Neoverse-V2 | GH200 | 490GB | Ethernet | CPU-Neoverse-V2 | 1.8TB |
| Viz Node*** | 1 | 48 | Intel Gold-6240R | 12x A16 (15GB) | 512GB | Ethernet | Gold-6240R, A16 | 880GB |
| Viz Node*** | 2 | 64 | Gold-6448Y | 2x A40 (48GB) | 512GB | Ethernet | Gold-6448Y, A40 | 7TB |

* These compute nodes are borrowed from the industry partition when industry usage is lower.  At times of higher industry usage, fewer compute nodes will be available in the general-compute partition.  For more information about the industry hardware, please see here.

** These are available in the arm64 partition and currently require a separate allocation in ColdFront.  PIs may request an allocation following our standard instructions.

*** These visualization nodes are accessible only through OnDemand.  You may request the viz partition and QOS in the OnDemand app forms.  The viz partition has a maximum run time of 24 hours and is limited to 1 job per user (queued or running).

**** Not all Sapphire Rapids standard compute nodes are connected to the Infiniband fabric.
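
The SLURM tags listed in the table above can be used to target a specific node type.  As a sketch, assuming the tags are exposed as SLURM node features (the usual mechanism behind --constraint), a job requesting a Cascade Lake V100 GPU node could include directives like these; the QOS and account values are placeholders:

    #SBATCH --partition=general-compute
    #SBATCH --qos=general-compute                 # placeholder; use the QOS tied to your allocation
    #SBATCH --account=yourgroup                   # placeholder SLURM account name
    #SBATCH --constraint="CASCADE-LAKE-IB&V100"   # SLURM tags from the table above; & requires both
    #SBATCH --gpus-per-node=2
    #SBATCH --nodes=1
    #SBATCH --time=02:00:00

Single tags work the same way, e.g. --constraint=ICE-LAKE-IB for an Ice Lake Infiniband node.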

 

Class Partition:

This partition is available to professors teaching small courses that might need high performance computing capabilities.  Professors should refer to this information regarding class access and contact us to discuss viability.  Those who have access to the class partition in the UB-HPC cluster will be able to use the following types of compute nodes (an example job script follows the table below):

| Type of Node | # of Nodes | # CPUs | Processor | GPU | RAM | Network | SLURM TAGS | Local /scratch |
|---|---|---|---|---|---|---|---|---|
| Standard Compute Node | 3 | 64 | Intel Gold 6448Y | - | 512GB | Ethernet | CPU-Gold-6448Y | 880GB |
| A40 GPU Node | 2 | 64 | Intel Gold 6448Y | 2x A40 (48GB GPU memory) | 512GB | Ethernet | CPU-Gold-6448Y, A40 | 7TB |
| L40S GPU Node | 2 | 64 | Intel Platinum 8562Y | 2x L40S (48GB GPU memory) | 512GB | Ethernet | CPU-Platinum-8562Y, L40S | 7TB |
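
As a sketch, a class job targeting one of the L40S GPU nodes might use directives like the following.  The partition and QOS name "class" is an assumption based on the description above, the course account is a placeholder, and the constraint usage assumes the SLURM tags are exposed as node features; the exact values will be provided when class access is set up:

    #!/bin/bash
    #SBATCH --partition=class             # assumed partition name; confirm with your course setup
    #SBATCH --qos=class                   # assumed QOS; confirm with your course setup
    #SBATCH --account=yourcourse          # placeholder course account
    #SBATCH --constraint=L40S             # SLURM tag from the table above
    #SBATCH --gpus-per-node=1
    #SBATCH --nodes=1
    #SBATCH --time=01:00:00
    ./my_gpu_program                      # placeholder application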