Computing Resources

The Center for Computational Research, a leading academic supercomputing facility, maintains a high-performance computing environment, on-premise research cloud, and support staff with expertise in advanced computing, modeling and simulation, visualization, and networking. 


High Performance Computing

The Center’s extensive computing facilities, housed in a state-of-the-art 4,000 sq ft machine room, include a Linux cluster with more than 20,000 processor cores connected by high-performance Infiniband and Omni-Path networks; a subset of nodes contain Nvidia Tesla V100 graphics processing units (GPUs). The cluster is accessible to all UB researchers. Industrial partners of the University have access to a separate partition of this cluster with more than 5,500 processor cores, including a subset of nodes with GPUs, connected via HDR Infiniband. The Center maintains a 1.5 PB Panasas high-performance parallel filesystem plus a 2 PB Vast Data shared network-attached filesystem. A leading academic supercomputing facility, CCR provides more than 2 PFlop/s of peak compute capacity. CCR additionally hosts a number of clusters and specialized storage devices for specific departments, projects, and collaborations; researchers interested in hosting services should contact CCR staff.

Cloud Computing

UB CCR's research cloud, nicknamed Lake Effect, is a subscription-based Infrastructure as a Service (IaaS) cloud that provides root-level access to virtual servers and storage on demand.  This means CCR can provide tech-savvy researchers with hardware that is not part of a compute cluster, which can be used for testing software and databases, running websites for research projects, conducting proof-of-concept studies for grant funding, and many other activities that benefit your research.  The CCR cloud is compatible with Amazon Web Services' (AWS) EC2 service, allowing users to move between the two services.  More details about the Lake Effect cloud are available from CCR.
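Because the cloud exposes an EC2-compatible API, standard AWS tooling can often be pointed at it by overriding the service endpoint. The sketch below illustrates this pattern with boto3; the endpoint URL, region name, and credentials are placeholders, not CCR's actual values, so consult CCR's documentation for the real connection details.

```python
# Sketch: using an EC2-compatible private cloud with boto3 by overriding
# the endpoint.  All connection values below are hypothetical placeholders.

ENDPOINT = {
    "endpoint_url": "https://cloud.example.edu:8788/",  # placeholder, not CCR's endpoint
    "region_name": "lake-effect",                       # placeholder region name
}

def launch_instance(image_id, access_key, secret_key):
    """Start a virtual server through the EC2-compatible API."""
    import boto3  # imported inside the function so the sketch is readable without boto3

    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        **ENDPOINT,
    )
    # RunInstances is part of the core EC2 API surface that
    # EC2-compatible clouds typically implement.
    return ec2.run_instances(ImageId=image_id, MinCount=1, MaxCount=1)
```

The same endpoint-override approach generally works for the AWS CLI and other EC2-compatible clients, which is what makes moving workloads between the private cloud and AWS practical.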


Remote Visualization

CCR offers dedicated compute nodes that host remote visualization capabilities for CCR users who require a GUI for OpenGL applications with access to CCR cluster resources.  These are available through CCR's OnDemand portal.


Data Storage

While users of the CCR clusters have varying data storage requirements, most need large amounts of storage that is available from all CCR resources.

In January 2021, CCR installed a Vast Data, Inc. storage solution which serves as the high reliability core storage for user home and group project directories.  The storage system consists of 2PB of usable storage in a hierarchical storage pool connected to the CCR core network with eight 40GigE links.  The storage is designed to tolerate simultaneous failures, helping ensure the 24x7x365 availability of the Center's primary storage.   

In August 2020, CCR brought a new 1.5 PB (petabyte) Panasas ActiveStor Ultra storage system online to provide scratch storage for cluster users.  This storage is CCR's high-performance parallel file system.  With 40GigE connections to the core Arista network, it has sustained I/O performance in excess of 30 gigabytes per second in tests.  Designed for high performance and concurrent access, CCR’s Panasas scratch is primarily intended for the generation and analysis of large quantities of short-lived data.
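A common workflow for short-lived scratch data is to write bulky intermediates to the fast scratch filesystem, copy only the final result to durable project storage, and then clean up. The minimal sketch below illustrates that pattern; the scratch and project paths are placeholders (falling back to temporary directories), not CCR's actual mount points.

```python
import os
import shutil
import tempfile

# Hypothetical locations -- substitute the scratch and project mount
# points your site actually provides (e.g. via environment variables).
SCRATCH_ROOT = os.environ.get("SCRATCH_DIR") or tempfile.mkdtemp(prefix="scratch_")
PROJECT_DIR = os.environ.get("PROJECT_DIR") or tempfile.mkdtemp(prefix="project_")

def run_job(job_name: str) -> str:
    """Write large intermediates to scratch; keep only the result."""
    workdir = os.path.join(SCRATCH_ROOT, job_name)
    os.makedirs(workdir, exist_ok=True)

    # Stand-in for a real computation producing large temporary files.
    with open(os.path.join(workdir, "intermediate.dat"), "w") as f:
        f.write("large short-lived data\n")

    result = os.path.join(workdir, "result.txt")
    with open(result, "w") as f:
        f.write("final summary\n")

    # Copy only the small result to project storage, then clean up --
    # scratch filesystems are typically purged and are not backed up.
    dest = os.path.join(PROJECT_DIR, f"{job_name}_result.txt")
    shutil.copy2(result, dest)
    shutil.rmtree(workdir)
    return dest

final_path = run_job("demo")
```

Keeping scratch for transient data and project space for results matches the intended roles of the two storage tiers described above.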

Please contact us to discuss your data storage needs and how we can best provide assistance for your research projects.

More details are available on enterprise user home & project directories, high-speed scratch, and cloud storage.


Networking

CCR maintains several enterprise-level networks to handle both the high speeds required by HPC and the large datasets often generated by HPC users.  See more details about the various networks in use at CCR.