Computing Resources

The Center for Computational Research, a leading academic supercomputing facility, maintains a high-performance computing environment, an on-premises research cloud, and support staff with expertise in advanced computing, modeling and simulation, visualization, and networking.


High Performance Computing

The Center’s extensive computing facilities, housed in a state-of-the-art 4,000 sq ft machine room, include a Linux cluster, accessible to all UB researchers, with more than 20,000 processor cores and high-performance InfiniBand/Omni-Path networks; a subset of its nodes contain NVIDIA Tesla V100 graphics processing units (GPUs). Industrial partners of the University have access to an additional cluster with more than 3,400 processor cores and FDR InfiniBand. The Center maintains a 3 PB IBM GPFS high-performance parallel filesystem plus a 1.7 PB EMC Isilon shared network-attached filesystem. A leading academic supercomputing facility, CCR has more than 1 PFlop/s of peak compute capacity. CCR additionally hosts a number of clusters and specialized storage devices for specific departments, projects, and collaborations; researchers interested in hosting services should contact CCR staff.

Cloud Computing

UB CCR's research cloud, nicknamed Lake Effect, is a subscription-based Infrastructure as a Service (IaaS) cloud that provides root-level access to virtual servers and storage on demand.  This means CCR can provide tech-savvy researchers with hardware that is not part of a compute cluster and that can be used for testing software and databases, running websites for research projects, conducting proof-of-concept studies for grant funding, and many other tasks that benefit your research.  The CCR cloud is compatible with Amazon Web Services (AWS), allowing users to move between the two services.  More details about the Lake Effect cloud are available from CCR.
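Because the cloud exposes an AWS-compatible interface, standard AWS tooling can typically be pointed at it by overriding the service endpoint. The profile below is a hypothetical sketch: the profile name, region, and endpoint URL are placeholders, not CCR's actual values, so consult CCR documentation for the real ones.

```ini
# ~/.aws/config -- hypothetical profile for an AWS-compatible private cloud.
# Region and endpoint are placeholders; CCR provides the real values.
[profile lake-effect]
region = us-east-1
output = json

# ~/.aws/credentials -- keys issued by the private cloud, not by Amazon.
[lake-effect]
aws_access_key_id = YOUR_CLOUD_ACCESS_KEY
aws_secret_access_key = YOUR_CLOUD_SECRET_KEY
```

With such a profile in place, the standard AWS CLI `--endpoint-url` option directs requests at the private cloud instead of Amazon, e.g. `aws --profile lake-effect --endpoint-url https://CLOUD-ENDPOINT ec2 describe-instances` (the endpoint hostname is again a placeholder).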


Remote Visualization

CCR offers dedicated compute nodes that provide remote visualization for users who need to run OpenGL application GUIs with access to CCR cluster resources.  These are available through CCR's OnDemand portal.

Storage

While users of the CCR clusters have varying data storage requirements, most need large amounts of storage that is available from all CCR resources.

In December 2015, CCR installed an EMC Isilon NAS storage solution, which serves as the highly reliable core storage for user home and group project directories.  The storage system consists of 1 PB of usable storage in a hierarchical storage pool connected to the CCR core network with two 10 GigE links per server.  The storage is designed to tolerate multiple simultaneous hardware failures, helping ensure the 24x7x365 availability of the Center's primary storage.

In August 2020, CCR brought a new 1.5 PB (petabyte) Panasas ActiveStor Ultra storage system online to provide scratch storage for cluster users.  This storage is CCR's high-performance parallel file system.  With 40 GigE connections to the core Arista network, it provides sustained I/O performance in excess of 30 gigabytes per second in tests.  Designed for high performance and concurrent access, CCR’s Panasas scratch is primarily intended for the generation and analysis of large quantities of short-lived data.
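A common way to use fast, short-lived scratch alongside reliable home/project storage is a stage-in/compute/stage-out pattern. The sketch below is hypothetical: the scratch and project paths are stand-ins created with `mktemp` so the example runs anywhere, and the `sort` step stands in for a real computation.

```shell
# Sketch of a stage-in/compute/stage-out pattern for fast scratch storage.
# SCRATCH and PROJECT are placeholders, not CCR's real filesystem paths;
# mktemp stands in for them so this sketch is runnable anywhere.
SCRATCH=$(mktemp -d)                        # placeholder for the scratch filesystem
PROJECT=$(mktemp -d)                        # placeholder for a home/project directory

printf 'b\na\nc\n' > "$PROJECT/input.dat"   # pretend this is your input data

WORKDIR="$SCRATCH/myjob.$$"                 # per-job directory on scratch
mkdir -p "$WORKDIR"
cp "$PROJECT/input.dat" "$WORKDIR/"         # stage input onto fast scratch
( cd "$WORKDIR" && sort input.dat > results.out )  # stand-in for the real computation
cp "$WORKDIR/results.out" "$PROJECT/"       # stage results back to reliable storage
rm -rf "$WORKDIR"                           # scratch data is short-lived: clean up
cat "$PROJECT/results.out"
```

Keeping intermediate files on scratch exploits the parallel filesystem's bandwidth, while only inputs and final results touch the slower, highly reliable core storage.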

Please contact us to discuss your data storage needs and how we can best provide assistance for your research projects.

More details on enterprise user home and project directories, high-speed scratch, and cloud storage are available from CCR.


CCR maintains several enterprise-level networks to handle both the high speeds required in HPC and the large datasets often generated by HPC users.  See more details about the various networks in use at CCR.