University at Buffalo - The State University of New York

Computing Resources

The Center for Computational Research, a leading academic supercomputing facility, maintains a high-performance computing environment, high-end visualization laboratories, and support staff with expertise in advanced computing, modeling and simulation, visualization, and networking. 

CCR's large production clusters currently provide more than 170 Tflops of peak compute capacity.

High Performance Computing

The Center’s extensive computing facilities, housed in a state-of-the-art 4,000 sq ft machine room, include a Linux cluster, accessible to all UB researchers, with more than 8,000 processor cores and QDR InfiniBand; a subset of 32 nodes contains 64 NVIDIA Tesla M2050 “Fermi” graphics processing units (GPUs). Industrial partners of the University have access to a cluster with more than 3,400 processor cores and FDR InfiniBand. The Center maintains a 3 PB IBM GPFS high-performance parallel file system. The computer visualization laboratory features a tiled display wall and a VisDuo passive stereo system. A leading academic supercomputing facility, CCR has more than 170 Tflops of peak compute capacity. CCR additionally hosts a number of clusters and specialized storage devices for specific departments, projects, and collaborations; interested researchers should contact CCR staff.
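Work on the clusters is submitted through a batch scheduler. The following is a minimal sketch of a submission script, assuming a SLURM scheduler; the directives, partition names, and the scheduler actually in use are documented by CCR, and `my_app` is a hypothetical application.

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=01:00:00
#SBATCH --job-name=example

# SLURM sets these variables inside a running job; the fallbacks let
# the script also run outside the cluster for testing.
MSG="Job ${SLURM_JOB_ID:-interactive} on ${SLURM_JOB_NODELIST:-$(hostname)}"
echo "$MSG"
# The real work would go here, e.g.: srun ./my_app
```

A script like this would typically be submitted with `sbatch` and monitored with `squeue`.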

Cloud Computing

UB CCR's research cloud, nicknamed Lake Effect, is a subscription-based Infrastructure as a Service (IaaS) cloud that provides root-level access to virtual servers and storage on demand.  This means CCR can provide tech-savvy researchers with hardware, outside of the compute clusters, that can be used for testing software and databases, running websites for research projects, conducting proof-of-concept studies for grant funding, and many other tasks that benefit your research.  The CCR cloud is 100% compatible with Amazon Web Services (AWS), allowing users to move between the two services.  More details about the Lake Effect cloud are available from CCR.
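Because the cloud exposes an AWS-compatible API, standard AWS tooling can in principle be pointed at it. The sketch below only prints the command it would run; the endpoint URL is a placeholder, and the real Lake Effect endpoint and credential setup are described in CCR's cloud documentation.

```shell
#!/bin/bash
# Placeholder endpoint -- substitute the real Lake Effect API endpoint
# from CCR's documentation.
ENDPOINT="${LAKE_EFFECT_ENDPOINT:-https://example.invalid}"
# An AWS-compatible cloud accepts standard EC2-style calls aimed at a
# custom endpoint; print the command here rather than invoking it.
CMD=(aws ec2 describe-instances --endpoint-url "$ENDPOINT")
echo "Would run: ${CMD[*]}"
```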

High End Visualization

The computer visualization laboratory features a tiled display wall and a VisDuo passive stereo system. The tiled display device was assembled to allow scientific investigation of high-resolution images by teams of scientists working in a comfortable setting. In addition, it has proven to be ideal for urban planning and design efforts. The tiled display wall is back-projected by nine Epson HD projectors, edge-blended with no seams and arranged in a 3 × 3 matrix, providing 15.2 megapixels of resolution.  The VisDuo is a ceiling-mounted, two-projector, passive stereo display used for viewing complex 3D environments, molecular structures, and medical simulations. The stereo effect is realized by each projector producing images for one eye. The output is polarized by special filters, and the resulting image is viewed on a custom polarization-preserving screen. Users can view the resulting 3D imagery by wearing lightweight polarizing glasses.

Remote Visualization:  CCR offers dedicated compute nodes that provide remote visualization for users who need to run an OpenGL application GUI with access to CCR cluster resources.


Data Storage

While users of the CCR clusters have varying data storage requirements, most need large amounts of storage available from all CCR resources.  At the beginning of 2015, CCR put a new 3 PB (petabyte) IBM GPFS storage system into production; this is CCR's high-performance parallel file system.  With 40GigE connections to the core Arista network, it has sustained I/O performance in excess of 30 gigabytes per second in tests.  Designed for high performance and concurrent access, CCR’s GPFS is intended primarily for the generation and analysis of large quantities of short-lived data (scratch usage).

In December 2015, CCR deployed an EMC Isilon NAS storage solution, which serves as the high-reliability core storage for user home and group project directories.  The storage system consists of 1 PB of usable storage in a hierarchical storage pool, connected to the CCR core network by two 10GigE links per server.  The storage is designed to tolerate simultaneous failures, helping ensure the 24x7x365 availability of the Center's primary storage.

Please contact us to discuss your data storage needs and how we can best provide assistance for your research projects.

EMC Isilon NAS Storage:

  • User home and group project disk space at CCR is contained in a fault tolerant, high availability network attached storage (NAS) solution
  • CCR provides a total of 1 PB of available disk storage to all the clusters in the Center. 
  • Directories are available via NFS mounts on the clusters and accessible from all compute nodes.
  • Home directories: /user
    • Example: /user/UBIT_username/
    • Default user quota is 5GB.
    • Backed up nightly by UB's Central Computing Enterprise Infrastructure Services department

  • Projects directories: /projects
    • Additional disk space is available for research groups in the project directories.
    • Example for academic users: /projects/academic/mygroup/
    • Example for industrial users: /projects/industry/mygroup [or projectname]
    • Faculty interested in project disk space should contact the CCR staff.
    • The default project directory quota is 500GB.  The quota can be increased to 1TB without charge.
    • If you require more than 1TB of storage, additional space can be purchased at a rate of $700/TB.  This rate is good for the 5-year lifespan of the storage warranty (which ends in December 2020).  After that time, you must buy into whatever new storage system CCR upgrades to.
    • Backed up nightly by UB's Central Computing Enterprise Infrastructure Services department.  Please note that some project spaces are larger than others and may require more than 24 hours to complete a backup.  Consider keeping a separate backup of files that change frequently and are very important to you.

GPFS Scratch Space:

CCR provides 500TB of high performance global scratch space.   

  • /gpfs/scratch
    • Accessible from all front end servers and compute nodes via the GPFS client.
    • Available to all users for temporary use.  Simply create a directory for yourself in /gpfs/scratch.
    • Data that has not been accessed in more than 3 weeks is subject to removal by a scrubber.  Please remove all data promptly after your job completes.
    • There is NO backup of data.
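The steps above can be sketched in shell: create a personal scratch directory, then list files whose access time makes them candidates for the scrubber. On the clusters the root would be /gpfs/scratch; the fallback path below lets the sketch run anywhere.

```shell
#!/bin/bash
# Create a personal scratch directory and list files the scrubber
# could remove (not accessed in 21+ days).  On the clusters
# SCRATCH_ROOT is /gpfs/scratch; the fallback lets this run anywhere.
SCRATCH_ROOT="${SCRATCH_ROOT:-${TMPDIR:-/tmp}/gpfs-scratch-demo}"
MYDIR="$SCRATCH_ROOT/${USER:-$(id -un)}"
mkdir -p "$MYDIR"
# An access time (-atime) older than 21 days marks scrub candidates:
find "$MYDIR" -type f -atime +21 -print
echo "Scratch directory ready: $MYDIR"
```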

Local Scratch Space:

All servers and compute nodes in all the clusters have local disk space (/scratch).

  • This scratch space is available to batch jobs running on the clusters.
  • Data on local scratch may be removed as soon as the job completes.
  • There is NO backup of data in /scratch.
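A common pattern for using local scratch is to stage data onto the node, compute there to avoid network I/O, and copy results back before the job ends. The sketch below assumes a SLURM scheduler variable and illustrative paths; the defaults fall back to temporary directories so it runs on any machine.

```shell
#!/bin/bash
# Stage into node-local scratch, compute, and copy results back before
# the job ends, since /scratch may be wiped when the job completes.
# On a cluster LOCAL would live under /scratch; the fallbacks let the
# sketch run anywhere.
LOCAL="${SLURM_JOB_ID:+/scratch/$SLURM_JOB_ID}"
LOCAL="${LOCAL:-$(mktemp -d)}"
RESULTS="${RESULTS:-${TMPDIR:-/tmp}/job-results}"
mkdir -p "$LOCAL" "$RESULTS"
cd "$LOCAL" || exit 1
echo "computed output" > output.dat   # stand-in for the real computation
cp output.dat "$RESULTS/"             # stage results off the node
echo "Results staged to $RESULTS/output.dat"
```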