Academic (UB-HPC) Compute Cluster Hardware Specs

Core Networking Equipment

CCR's core network uses 10G Arista networking equipment.

The academic compute cluster, available to all faculty at UB, is composed of various Linux "nodes" (a.k.a. servers) with differing hardware specs, manufactured by several different vendors.  The hardware is similar enough that, when networked together, users can run large problems across many nodes and complete them faster.  This is known as a "Beowulf cluster."
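For a flavor of what "running across many nodes" looks like in practice, here is a toy sketch in which every MPI rank reports which node it landed on. This assumes an MPI stack and the mpi4py package are available on the cluster, which this page does not state:

    from mpi4py import MPI  # assumes an MPI installation plus mpi4py

    # Toy illustration of the Beowulf idea: many cooperating processes,
    # spread across nodes, each report where they are running.
    comm = MPI.COMM_WORLD
    print(f"rank {comm.Get_rank()} of {comm.Get_size()} "
          f"running on {MPI.Get_processor_name()}")

Launched under the scheduler (for example with srun or mpirun), this prints one line per MPI rank, typically spanning several of the nodes described below.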


Fun Fact!

Beowulf is the earliest surviving epic poem written in the English language. It tells the story of a hero with the strength of many men who defeated the fearsome monster Grendel.  In computing, a Beowulf-class cluster is a multicomputer architecture used for parallel computations, i.e., it uses many computers together so that it has the brute force to defeat fearsome number-crunching problems.

Disk Layout

  • /user - user $HOME directories, NFS-mounted from the CCR SAN to the compute nodes and front-end servers.
  • /scratch - primary high-performance scratch space, local to each compute node (the available size varies by node type; see the node descriptions below).
    • Accessible through SLURM, which automatically creates a unique scratch directory in /scratch for each new batch job (see the usage sketch after this list).
    • All scratch space is scrubbed automatically at the end of each batch job. Files that need to be stored long term should be kept elsewhere.
  • /gpfs/scratch - globally accessible high-performance parallel scratch space for staging/preserving data between runs.
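A minimal sketch of how a job might use these areas. The environment variable name SLURMTMPDIR and the per-user /gpfs/scratch layout are assumptions for illustration; check your own job environment:

    import os
    import shutil

    # Per-job local scratch: SLURM creates a unique directory under /scratch
    # for each batch job.  The variable name SLURMTMPDIR is an assumption
    # about how that path is exposed; fall back to /tmp if it is not set.
    job_scratch = os.environ.get("SLURMTMPDIR", "/tmp")

    # Do fast, temporary I/O in node-local scratch while the job runs.
    work_file = os.path.join(job_scratch, "intermediate.dat")
    with open(work_file, "w") as fh:
        fh.write("intermediate results\n")

    # Node-local scratch is scrubbed when the job ends, so copy anything
    # worth keeping to the globally visible parallel scratch space.
    keep_dir = os.path.join("/gpfs/scratch", os.environ.get("USER", "me"))  # hypothetical layout
    os.makedirs(keep_dir, exist_ok=True)
    shutil.copy(work_file, keep_dir)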
 

Front-end servers for UB-HPC cluster

These servers are for interactive use, job submission, and debugging code. A CPU time limit of 30 minutes is in effect to prevent users from running computationally intensive software on the login servers (a quick way to check this limit is sketched after the hardware list below).

  • Pool hostname = vortex.ccr.buffalo.edu --> use this name to log in; you will be placed on one of the front-end servers, which helps distribute the load. Logging into vortex will put you on one of these two servers (more may be added in the future):
  • Hostname = vortex1.ccr.buffalo.edu and vortex2.ccr.buffalo.edu
  • Vendor = Dell
  • Number of Processor Cores = 32
  • Processor Description:
    • Intel Xeon Gold 6130 CPU @ 2.10GHz Processor
    • 2 sockets, 16 cores per socket
    • Main memory size: 192 GB
  • Operating System: Linux (CentOS 7.5.x)
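The sketch below reads the CPU-time limit applied to the current process. This assumes the 30-minute limit is enforced through the standard Linux rlimit mechanism, which this page does not actually specify:

    import resource

    # Read the CPU-time limit (seconds) applied to the current process.
    # Assumption: the login-node limit is applied via RLIMIT_CPU; if it is
    # enforced some other way this will simply report "unlimited".
    soft, hard = resource.getrlimit(resource.RLIMIT_CPU)

    def fmt(seconds):
        if seconds == resource.RLIM_INFINITY:
            return "unlimited"
        return f"{seconds // 60} minutes"

    print(f"CPU time limit: soft={fmt(soft)}, hard={fmt(hard)}")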

Dell 8-core Compute Nodes

Accessible only through the SLURM scheduler

  • PowerEdge C6100 - dual quad-core compute nodes
    • Number of nodes = 128
    • Vendor = Dell
    • Number of Processor Cores = 8
    • Processor Description:
      • 8 x 2.13GHz Intel Xeon L5630 "Westmere-EP" Processor Cores
      • Main memory size:  24 GB
      • Instruction cache size: 128 Kbytes
      • Data cache size: 128 Kbytes
      • Secondary unified instruction/data cache size: 12 MBytes
    • Operating System: Linux (CentOS 7.5.x)
    • InfiniBand Mellanox Technologies MT26428 Network Card
      • QDR InfiniBand 40Gb/s
    • Local scratch disk space is approximately 268GB

IBM 8-core Compute Nodes

Accessible only through the SLURM scheduler

  • iDataPlex - dual quad-core Compute Nodes
    • Number of nodes = 128
    • Vendor = IBM
    • Number of Processor Cores = 8
    • Processor Description:
      • 8 x 2.27GHz Intel Xeon L5520 (Nehalem-EP) Processor Cores
      • Main memory size: 24 GB
      • Instruction cache size: 128 Kbytes
      • Data cache size: 128 Kbytes
      • Secondary unified instruction/data cache size: 12 MBytes
    • Operating System: Linux (CentOS 7.5.x)
    • InfiniBand Mellanox Technologies MT26428 Network Card
      • QDR InfiniBand 40Gb/s
    • Local scratch disk space is approximately 268GB

Dell 12-core Compute Nodes

Accessible only through the SLURM scheduler

  • Number of nodes = 372
  • Vendor = Dell
  • Architecture = Dell E5645
  • Number of Processor Cores = 12
  • Processor Description:
    • 12 x 2.40GHz Intel Xeon E5645 Processor Cores
    • Main memory size: 48 GB
    • Instruction cache size: 24576 Kbytes
    • Data cache size: 24576 Kbytes
    • Secondary unified instruction/data cache size: 8 MBytes
  • Operating System: Linux (CentOS 7.5.x)
  • InfiniBand Q-Logic InfiniPath_QLE7340 Network Card
    • QDR InfiniBand 40Gb/s
  • Local scratch is approximately 884GB

 

Dell 16-core Compute Nodes

Accessible only through the SLURM scheduler

  • Number of nodes = 32
  •  Dual 8-core Compute Nodes
  • Vendor = Dell
  • Architecture = PowerEdge Server
  • Number of Processor Cores = 16
  • Processor Description:
    • 16 x 2.20GHz Intel Xeon E5-2660 "Sandy Bridge" Processor Cores
    • Main memory size: 128 GB
    • Instruction cache size: 128 Kbytes
    • Data cache size: 128 Kbytes
    • Secondary unified instruction/data cache size: 20 Mbytes
  • InfiniBand Mellanox Technologies MT26428 Network Card
    • QDR InfiniBand 40Gb/s
  • Local scratch is approximately 770 GB
  • Operating System: Linux (CentOS 7.5.x)

 

IBM 32-core Large Memory Nodes

Accessible only through the SLURM scheduler; jobs must request the largemem partition (a submission sketch follows the node specs below)

  • Number of nodes = 8
  • Primary IBM 32-core Compute Nodes
  • Vendor = IBM
  • Architecture = AMD Opteron 6132 HE
  • Number of Processor Cores = 32
  • Processor Description:
    • 32 x 2.20GHz AMD Opteron 6132 HE Processor Cores
    • Main memory size: 256 GB
    • Instruction cache size: 24576 Kbytes
    • Data cache size: 24576 Kbytes
    • Secondary unified instruction/data cache size: 8 MBytes
  • Operating System: Linux (CentOS 7.5.x)
  • InfiniBand Q-Logic InfiniPath_QLE7340 Network Card
    • QDR InfiniBand 40Gb/s
  • Local scratch is approximately 3.1TB
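As noted above, these nodes are reached by requesting the largemem partition. A minimal submission sketch using standard sbatch options; the executable name, memory, and time values are placeholders, and site-specific options (e.g. QOS) may also be required:

    import subprocess

    # Submit a job to the large-memory partition named on this page.
    # --partition/--nodes/--mem/--time/--wrap are standard sbatch options;
    # the memory request just needs to fit within the node's installed RAM.
    cmd = [
        "sbatch",
        "--partition=largemem",
        "--nodes=1",
        "--mem=200G",
        "--time=01:00:00",
        "--wrap=./my_big_memory_job",   # hypothetical executable
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout.strip())        # e.g. "Submitted batch job <id>"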

Dell 32-core Large Memory Nodes

Accessible only through the SLURM scheduler; jobs must request the largemem partition

  • Dell E7-4830 node (512 GB memory)
    • Number of nodes = 1
    • Vendor = Dell
    • Architecture = Dell E7-4830
    • Number of Processor Cores = 32
    • Processor Description:
      • 32 x 2.13GHz Intel Xeon CPU E7-4830 Processor Cores
      • Main memory size: 512 GB
      • Instruction cache size: 24576 Kbytes
      • Data cache size: 24576 Kbytes
      • Secondary unified instruction/data cache size: 8 MBytes
    • Operating System: Linux (CentOS 7.5.x)
    • InfiniBand Q-Logic InfiniPath_QLE7340 Network Card
      • QDR InfiniBand 40Gb/s
    • Local scratch is approximately 3.1TB
  • Dell E7-4830 nodes (256 GB memory)
    • Number of nodes = 8
    • Vendor = Dell
    • Architecture = Dell E7-4830
    • Number of Processor Cores = 32
    • Processor Description:
      • 32 x 2.13GHz Intel Xeon CPU E7-4830 Processor Cores
      • Main memory size: 256 GB
      • Instruction cache size: 24576 Kbytes
      • Data cache size: 24576 Kbytes
      • Secondary unified instruction/data cache size: 8 MBytes
    • Operating System: Linux (CentOS 7.5.x)
    • InfiniBand Q-Logic InfiniPath_QLE7340 Network Card
      • QDR InfiniBand 40Gb/s
    • Local scratch disk space is approximately 3.1TB
  • Dell PowerEdge R640 - dual socket CPUs, 16 cores per socket
    • Number of nodes = 16
    • Vendor = Dell
    • Number of CPU cores = 16 per socket (32 total)
    • Number of threads = 32
    • Processor Description:
      • 32 x 2.10GHz Intel Xeon Gold 6130
      • Main memory size: 768 GB
      • Instruction cache size: 32 Kbytes
      • Data cache size: 32 Kbytes
      • Secondary unified instruction/data cache size: 1024 Kbytes
    • Operating System: Linux (CentOS 7.5.x)
    • Omni-Path HFI Silicon 100 Series Network Card
    • Local scratch disk space is approximately 3.5TB

Dell 32-core GPU Nodes

Accessible only through the SLURM scheduler; jobs must request the gpu partition (a device-query sketch follows the hardware list below)

  • PowerEdge R740 - dual socket CPUs, 16 cores per socket
    • Number of nodes = 16
    • Vendor = Dell
    • Number of CPU cores = 16 per socket (32 total)
    • Number of threads = 32
    • Processor Description:
      • 32 x 2.10GHz Intel Xeon Gold 6130 CPU
      • Main memory size:  192 GB
      • Clock speed: 1810MHz
      • Instruction cache (L1) size: 32 Kbytes
      • Data cache (L1) size: 32 Kbytes
      • Secondary unified instruction/data cache (L2) size: 1024 Kbytes
    • GPU Description:
      • Number of GPUs: 2
      • NVIDIA Volta Tesla V100 PCIe GPUs
      • 16GB HBM2 Memory in each card
      • Memory bandwidth 900 GB/sec
      • Double-precision performance: 7 TFLOPS
      • Single-precision performance: 14 TFLOPS
    • Operating System: Linux (CentOS 7.5.x)
    • Omni-Path HFI Silicon 100 Series Network Card
    • Local scratch is approximately 827GB
  • PowerEdge R910 - quad-socket, oct-core Compute Node
    • Number of nodes = 1
    • Vendor = Dell
    • Number of Processor Cores = 32
    • Processor Description:
      • 32 x 2.0GHz Intel Xeon X7550 "Beckton" (Nehalem-EX) Processor Cores
      • Main memory size: 256 GB
      • Instruction cache size: 128 Kbytes
      • Data cache size: 128 Kbytes
      • Secondary unified instruction/data cache size: 18 MBytes
    • Local Hard Drives: 2 x 500GB SATA (/scratch), 14 x 100GB SSD (/ss_scratch)
    • Local scratch is approximately 1.9TB total
    • Operating System: Linux (CentOS 7.5.x)
    • InfiniBand Mellanox Technologies MT26428 Network Card
      • QDR InfiniBand 40Gb/s
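To confirm which GPUs the scheduler has handed your job on one of the GPU nodes above, a quick device query works well. The sketch below shells out to nvidia-smi using its standard query options and should report the Tesla V100 cards and their 16GB of memory:

    import subprocess

    # List the GPUs visible to this job: model name and total memory.
    # --query-gpu and --format are standard nvidia-smi options.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    for gpu in out.strip().splitlines():
        print(gpu)   # one line per visible GPU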

 

Dell 32-core "Skylake" Compute Nodes

Accessible only through the SLURM scheduler; jobs must request the skylake partition

  • PowerEdge R440 - dual socket, 16 cores per socket
    • Number of nodes = 86
    • Vendor = Dell
    • Number of CPU cores = 16 per socket (32 total)
    • Number of threads = 32
    • Processor Description:
      • 32 x 2.10GHz Intel Xeon Gold 6130 CPU
      • Main memory size: 192 GB
      • Clock speed: 1810MHz
      • Instruction cache (L1) size: 32 Kbytes
      • Data cache (L1) size: 32 Kbytes
      • Secondary unified instruction/data cache (L2) size: 1024 Kbytes
    • Operating System: Linux (CentOS 7.5.x)
    • Omni-Path HFI Silicon 100 Series Network Card
    • Local scratch disk space is approximately 827GB