2014 Industrial Compute Cluster

The Hewlett-Packard cluster was funded by a New York State Empire State Development grant to provide the WNY and NY State industrial community with access to state-of-the-art high performance computing resources (hardware, software, and consulting services) to help foster economic development.  This cluster consists of 216 HP SL230 Gen8 servers, of which 144 are "Parallel" compute nodes with an FDR InfiniBand interconnect. The remaining 72 nodes are "Serial" nodes with GigE connections. Each node contains two Intel "Ivy Bridge" Xeon 2.6GHz (E5-2650v2) 8-core processors, 64GB of memory, and 500GB of local scratch space.


These nodes are now available in the UB-HPC (Academic) cluster.

Server (Node) Types:

Type of Node | # of Nodes | # Cores/Node | Clock Rate | RAM | Network | SLURM TAGS | Local /scratch
Compute | 144 | 16 | 2.60GHz | 64GB | FDR InfiniBand | IB CPU-E5-2650v2 | 500GB
Compute | 72 | 16 | 2.60GHz | 64GB | Ethernet Only | CPU-E5-2650v2 | 500GB
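The SLURM TAGS above are node "features" that the scheduler exposes. As a sketch, you can list the nodes in a partition along with their CPU count, memory, and feature tags using standard `sinfo` format options (the partition name `compute` is taken from the table below; the output columns shown are what these format specifiers produce in general, not captured from this cluster):

```shell
# List nodes, CPUs per node, memory (MB), and feature tags for the compute partition.
# %N = node list, %c = CPUs, %m = memory, %f = available features
sinfo -p compute -o "%N %c %m %f"
```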

Requesting Specific Node Resources:

Sample SLURM Directives
To request InfiniBand-networked nodes only:
--nodes=2 --ntasks-per-node=8 --constraint=IB
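In a batch job, the same directives go at the top of the submission script. A minimal sketch, assuming the "compute" partition from this cluster (the job name, output file, time limit, and `my_mpi_program` executable are illustrative placeholders, not part of the cluster documentation):

```shell
#!/bin/bash
#SBATCH --partition=compute        # industrial compute partition
#SBATCH --nodes=2                  # two nodes
#SBATCH --ntasks-per-node=8        # eight MPI ranks per node
#SBATCH --constraint=IB            # restrict to InfiniBand-tagged nodes
#SBATCH --time=01:00:00            # illustrative time limit
#SBATCH --job-name=ib_test         # hypothetical job name
#SBATCH --output=ib_test.out       # hypothetical output file

# srun launches the tasks across the allocated InfiniBand nodes;
# my_mpi_program is a placeholder for your own executable
srun ./my_mpi_program
```

Submit it with `sbatch jobscript.sh`.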

Partitions Available:

The industrial cluster is divided into partitions (formerly known as "queues") on which users can request to have their jobs run.  The "compute" partition is available only to industrial partners.  UB faculty and students have the option of running their jobs in the "scavenger" partition, which allows jobs to run when there are no pending jobs in the compute partition.  Once an industrial user submits a job requesting resources, jobs in the scavenger partition are stopped and requeued.  Please contact us if you'd like to test your jobs using this preemption feature of the job scheduler.  Note that your jobs MUST be able to checkpoint.  Please see our checkpointing documentation for details.

Partition Name | Time Limit | Default Number CPUs | Notes
compute | 72 hours | 1 | available only to industrial partners
scavenger | 72 hours | 1 | --requeue flag required for jobs to be restarted
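A scavenger job must carry the --requeue flag so the scheduler restarts it after preemption, and the application itself must be able to resume from a checkpoint. A minimal sketch of such a submission script (the executable name, its flags, and the checkpoint file path are hypothetical placeholders; only the partition name and the --requeue requirement come from the table above):

```shell
#!/bin/bash
#SBATCH --partition=scavenger      # preemptable partition for UB faculty/students
#SBATCH --requeue                  # required: lets SLURM restart the job after preemption
#SBATCH --time=72:00:00            # partition time limit from the table above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16

# After a preemption and requeue, the job starts over from the top, so the
# application must detect and reload its own checkpoint. my_app and
# checkpoint.dat are placeholders for your program and its checkpoint file.
srun ./my_app --restart-from checkpoint.dat
```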