Academic Compute Partitions Hardware Specs

The UB-HPC cluster contains several partitions available only to academic users. These partitions consist of Linux "nodes" (a.k.a. servers) with differing hardware specs, manufactured by several different vendors. The hardware is similar enough that, when the nodes are networked together, users can run complex problems across many nodes and complete them faster. This arrangement is known as a "Beowulf cluster."

For information about the nodes in the "Industry" partition of this cluster, please see this page.


Fun Fact!

Beowulf is the earliest surviving epic poem written in English. It tells the story of a hero with the strength of many men who defeated a fearsome monster called Grendel. In computing, a Beowulf-class cluster is a multicomputer architecture used for parallel computation, i.e., it uses many computers together so that it has the brute force to defeat fearsome number-crunching problems.

Disk Layout

  • /user, user $HOME directories, NFS-mounted from the CCR SAN to the compute nodes and front-ends.
  • /scratch, primary high-performance scratch space, located on each compute node (see the node listings below for what is available on each /scratch, as it varies by node type).
    • Accessible through SLURM, which automatically creates a unique scratch directory in /scratch for each new batch job (a minimal usage sketch follows this list).
    • All scratch space is scrubbed automatically at the end of each batch job. Files that need to be stored long term should be kept elsewhere.
  • /panasas/scratch, globally accessible high-performance parallel scratch space for staging/preserving data between runs. Being removed from service in 2024.
  • /vscratch, globally accessible high-performance parallel scratch space for staging/preserving data between runs. New for users in 2024.
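To make the scratch workflow concrete, here is a minimal Python sketch of a job script body. It assumes the scheduler exports an environment variable pointing at the per-job scratch directory (shown here as SLURMTMPDIR; the exact variable name is site-specific, so confirm it in the CCR documentation), and it stages results to /vscratch before the job ends, since per-job scratch is scrubbed when the batch job finishes. The destination path is illustrative.

    import os
    import shutil
    from pathlib import Path

    # Per-job scratch directory created by the scheduler. The variable name
    # SLURMTMPDIR is an assumption; confirm the exact name for this cluster.
    scratch = Path(os.environ.get("SLURMTMPDIR", "/tmp"))

    # Intermediate output lives in fast node-local scratch during the run.
    workfile = scratch / "output.dat"
    workfile.write_text("intermediate results\n")  # stand-in for real work

    # Copy anything worth keeping out of /scratch before the job ends,
    # because all per-job scratch space is scrubbed after the batch job.
    dest = Path("/vscratch") / os.environ.get("USER", "unknown") / "results"
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copy2(workfile, dest / workfile.name)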

Node specifications, ordered from oldest to newest: