Academic (UB-HPC) Compute Cluster Hardware Specs

Core Networking Equipment

CCR core 10G Arista networking equipment

The academic compute cluster, available to all UB faculty, is composed of various Linux "nodes" (a.k.a. servers) with differing hardware specifications, manufactured by several different vendors.  The hardware is similar enough that, when the nodes are networked together, users can run complex problems across many nodes and finish them faster.  This is known as a "Beowulf cluster."

Fun Fact!

In the history of the English language, Beowulf is the earliest surviving epic poem, written in Old English. It tells the story of a hero with the strength of many men who defeated a fearsome monster called Grendel.  In computing, a Beowulf-class cluster computer is a multi-computer architecture used for parallel computation, i.e., it uses many computers together so that it has the brute force to defeat fearsome number-crunching problems.
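
As a rough illustration of the "many nodes, one problem" idea, the sketch below splits a simple sum across MPI processes and combines the partial results. It assumes an MPI stack and the mpi4py Python package are available (an assumption about the software environment, not a statement of CCR's supported stack) and would typically be launched with the site's MPI launcher, such as srun or mpirun.

    # Minimal sketch: each MPI process (rank) works on its own slice of the
    # problem, and the partial results are combined on rank 0.
    # Assumes mpi4py and an MPI library are installed; the launch method is site-specific.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID within the job
    size = comm.Get_size()   # total number of processes across all nodes

    # Each rank sums a strided slice of 0..999999.
    local_sum = sum(range(rank, 1_000_000, size))

    # Combine the partial sums on rank 0.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"{size} processes computed total = {total}")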

Disk Layout

  • /user, user $HOME directories, NFS mounted from the CCR SAN to the compute nodes and front-ends.
  • /scratch, primary high-performance scratch space located on each compute node (see above for what is available on each node type, as the size of /scratch varies by node).
    • Accessible through SLURM, which automatically creates a unique scratch directory in /scratch for each new batch job (see the sketch after this list).
    • All scratch space is scrubbed automatically at the end of each batch job; files that need to be kept long term should be stored elsewhere.
  • /gpfs/scratch, globally accessible high-performance parallel scratch space for staging/preserving data between runs.
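
Because the per-job /scratch directory is wiped when the batch job ends, a job should do its heavy I/O there and copy anything worth keeping to $HOME or /gpfs/scratch before finishing. The sketch below shows that pattern in Python; the environment variable holding the per-job scratch path (shown here as SLURMTMPDIR) and the fallback path are assumptions, so check CCR's documentation for the actual variable name.

    # Sketch: work in the per-job scratch directory, then copy results out
    # before the job ends and the scratch space is scrubbed.
    # SLURMTMPDIR is an assumed, site-specific variable name for the per-job
    # scratch directory; SLURM_JOB_ID is set by SLURM inside batch jobs.
    import os
    import shutil

    job_id = os.environ.get("SLURM_JOB_ID", "interactive")
    scratch = os.environ.get("SLURMTMPDIR", os.path.join("/scratch", job_id))
    os.makedirs(scratch, exist_ok=True)   # normally created by SLURM already

    # Heavy, temporary I/O goes to node-local scratch.
    result_file = os.path.join(scratch, "result.dat")
    with open(result_file, "w") as f:
        f.write("intermediate results\n")

    # Copy anything that must survive the job to persistent storage
    # (e.g. $HOME or /gpfs/scratch).
    keep_dir = os.path.join(os.environ["HOME"], "results", job_id)
    os.makedirs(keep_dir, exist_ok=True)
    shutil.copy2(result_file, keep_dir)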