The Center for Computational Research, a leading academic supercomputing facility, maintains a high-performance computing environment, an on-premises research cloud, and support staff with expertise in advanced computing, modeling and simulation, visualization, and networking.
The Center’s extensive computing facilities, which are housed in a state-of-the-art 4000 sq ft machine room, include a Linux cluster, accessible to all UB researchers, with more than 20,000 processor cores and high-performance InfiniBand/Omni-Path networks; a subset of its nodes contain NVIDIA Tesla V100 graphics processing units (GPUs). Industrial partners of the University have access to an additional cluster with more than 3400 processor cores and FDR InfiniBand. The Center maintains a 3 PB IBM GPFS high-performance parallel filesystem plus a 1.7 PB EMC Isilon shared network-attached filesystem. In aggregate, CCR's systems provide more than 1 PFlop/s of peak compute capacity. CCR additionally hosts a number of clusters and specialized storage devices for specific departments, projects, and collaborations; researchers interested in hosting services should contact CCR staff.
UB CCR's research cloud, nicknamed Lake Effect, is a subscription-based Infrastructure as a Service (IaaS) cloud that provides root-level access to virtual servers and storage on demand. With it, CCR can provide tech-savvy researchers with hardware outside the compute clusters for testing software and databases, running websites for research projects, conducting proof-of-concept studies for grant funding, and many other activities that benefit research. The CCR cloud is compatible with Amazon Web Services (AWS) APIs, allowing users to move between the two services. More details about the Lake Effect cloud are available from CCR.
CCR offers dedicated compute nodes that host remote visualization capabilities for CCR users who need to run OpenGL GUI applications with access to CCR cluster resources. These nodes are available through CCR's OnDemand Portal.
While storage requirements vary among users of the CCR clusters, most need large amounts of storage that is available from all CCR resources. At the beginning of 2015, CCR put a new 3 PB (petabyte) IBM GPFS storage system into production to serve UB researchers. This storage is CCR's high-performance parallel file system. With 40GigE connections to the core Arista network, it has provided sustained I/O performance in excess of 30 gigabytes per second in testing. Designed for high performance and concurrent access, CCR’s GPFS is primarily intended for the generation and analysis of large quantities of short-lived data (scratch usage).
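The scratch usage described above typically follows a stage-in/compute/stage-out pattern: input data is copied to the fast parallel filesystem, the job works on scratch-resident data, and results are copied back to durable storage before cleanup. The sketch below is illustrative only; the scratch location is a hypothetical stand-in (a temporary directory), not an actual CCR path:

```shell
#!/bin/sh
# Illustrative stage-in/compute/stage-out pattern for short-lived scratch
# storage. SCRATCH_DIR is a hypothetical stand-in created with mktemp -d;
# the real GPFS scratch path on a CCR cluster will differ.
SCRATCH_DIR=$(mktemp -d)

# Stage input data onto the fast scratch filesystem.
echo "sample input" > "$SCRATCH_DIR/input.dat"

# Run the analysis against scratch-resident data (placeholder step).
tr '[:lower:]' '[:upper:]' < "$SCRATCH_DIR/input.dat" > "$SCRATCH_DIR/output.dat"

# Copy results back to durable storage (e.g. a home or project directory)...
cp "$SCRATCH_DIR/output.dat" ./results.dat

# ...because scratch data is short-lived: clean up when the job ends.
rm -rf "$SCRATCH_DIR"
```

Keeping only transient job data on scratch and copying results back promptly fits the filesystem's intent: high concurrent throughput for active jobs, not long-term retention.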
In December 2015, CCR put into place an EMC Isilon NAS storage solution that serves as the high-reliability core storage for user home and group project directories. The storage system consists of 1 PB of usable storage in a hierarchical storage pool connected to the CCR core network with two 10GigE links per server. The storage is designed to tolerate simultaneous hardware failures, helping ensure the 24x7x365 availability of the Center's primary storage.
Please contact us to discuss your data storage needs and how we can best provide assistance for your research projects.
CCR maintains several enterprise-level networks to handle both the high speeds required by HPC and the large datasets often generated by HPC users. More details about the various networks in use at CCR are available from CCR.