University at Buffalo - The State University of New York

Computing Resources

The Center for Computational Research, a leading academic supercomputing facility, maintains a high-performance computing environment, high-end visualization laboratories, and support staff with expertise in advanced computing, modeling and simulation, visualization, and networking. 

CCR's large production clusters currently provide more than 170 Tflops of peak compute capacity.

High Performance Computing

The Center’s extensive computing facilities, housed in a state-of-the-art 4,000 sq ft machine room, include a Linux cluster, accessible to all UB researchers, with more than 8,000 processor cores and QDR InfiniBand; a subset of 32 nodes contains a total of 64 NVIDIA Tesla M2050 “Fermi” graphics processing units (GPUs).  Industrial partners of the University have access to a cluster with more than 3,400 processor cores and FDR InfiniBand.  The Center maintains a 3 PB IBM GPFS high-performance parallel file system.  A leading academic supercomputing facility, CCR has more than 170 Tflops of peak compute capacity.  CCR additionally hosts a number of clusters and specialized storage devices for specific departments, projects, and collaborations; interested researchers should contact CCR staff.



High End Visualization

The computer visualization laboratory features a tiled display wall and a VisDuo passive stereo system. The tiled display device was assembled to allow scientific investigation of high-resolution images by teams of scientists working in a comfortable setting. In addition, it has proven to be ideal for urban planning and design efforts. The tiled-display wall is back-projected by nine edge-blended (seamless) Epson HD projectors arranged in a 3-across by 3-high matrix, providing 15.2 megapixels of resolution.  The VisDuo is a ceiling-mounted, two-projector, passive stereo display used for viewing complex 3D environments, molecular structures, and medical simulations. The stereo effect is realized by each projector producing images for one eye. The output is polarized by special filters, and the resulting image is viewed on a custom polarization-preserving screen. Users can view the resulting 3D imagery by wearing lightweight polarizing glasses.

Remote Visualization:  CCR offers dedicated compute nodes that host remote visualization capabilities for CCR users who require an OpenGL application GUI with access to the CCR cluster resources.




Data Storage

While users of the CCR clusters have varying data storage requirements, most need large amounts of storage available from all the CCR resources.  At the beginning of 2015, CCR put a new 3 PB (petabyte) IBM GPFS storage system into production to help UB researchers.  We've designed the system to provide fast I/O for parallel applications as well as the security of high availability and reliability.  We've also made sure this storage system can be easily expanded as the storage needs of nearly all areas of research continue to grow.  Please contact us to discuss your data storage needs and how we can best assist your research projects.

IBM General Parallel File System (GPFS):

  • User disk space at CCR is contained in a high-performance GPFS storage array. 
  • CCR provides a total of 3 PB of available disk storage to all the clusters in the Center. 
  • Directories are available via the GPFS client on the clusters and accessible from all compute nodes.
  • Home directories: /user
    • Example: /user/UBIT_username/
    • Default user quota is 2 GB.
    • Backed up by UB's Central Computing Enterprise Infrastructure Services department

  • Project directories: /projects
    • Example: /projects/mygroup/
    • Additional disk space is available for research groups in the project directories.
    • Faculty interested in project disk space should contact the CCR staff.
    • The default PI (principal investigator) directory quota is 200 GB; it can be increased to 1 TB at no charge.
    • If you require more than 1 TB of storage, additional space can be purchased at a rate of $700/TB.  This rate applies for the 5-year lifespan of the storage warranty (which ends in December 2020); after that, continued use requires buying in to whatever new storage system CCR upgrades to.
    • Backed up by UB's Central Computing Enterprise Infrastructure Services department
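Because home and project directories have fixed default quotas, it can be handy to estimate how much space a directory tree is using. The sketch below is illustrative, not a CCR-provided tool; the /user path layout and the 2 GB default quota come from the list above, and `directory_usage_bytes` is a hypothetical helper that works on any readable directory.

```python
import os

# Assumed default home-directory quota (2 GB), per the list above.
QUOTA_BYTES = 2 * 1024**3

def directory_usage_bytes(path):
    """Sum the apparent size of all regular files under `path`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            filepath = os.path.join(root, name)
            try:
                total += os.path.getsize(filepath)
            except OSError:
                pass  # file vanished or is unreadable mid-scan; skip it
    return total
```

On a cluster login node, calling `directory_usage_bytes("/user/UBIT_username")` and comparing the result against `QUOTA_BYTES` would give a rough picture of usage against the default quota.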

GPFS Scratch Space:

CCR provides 500 TB of high-performance global scratch space.

  • /gpfs/scratch
    • Accessible from all compute nodes.
    • Available to all users for temporary use; simply create a directory for yourself in /gpfs/scratch.
    • Data that has not been accessed in more than 3 weeks is subject to removal by a scrubber.  Please remove all data promptly after your job completes.
    • There is NO backup of data.
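The scrubber policy above (removal of data not accessed in more than 3 weeks) can be illustrated with a short script that flags candidate files by last access time. This is only a sketch of the policy, not CCR's actual scrubber; note also that some filesystems are mounted without access-time updates, so the times reported by `stat` may be conservative.

```python
import os
import time

# Three weeks, matching the scrubber policy described above.
SCRUB_AGE_SECONDS = 21 * 24 * 3600

def stale_files(path, now=None):
    """Return files under `path` whose last access time is older than 3 weeks."""
    now = time.time() if now is None else now
    stale = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            filepath = os.path.join(root, name)
            try:
                if now - os.stat(filepath).st_atime > SCRUB_AGE_SECONDS:
                    stale.append(filepath)
            except OSError:
                pass  # skip files removed mid-scan
    return stale
```

Running `stale_files` over your own scratch directory is one way to see what would be at risk before the scrubber does.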

Local Scratch Space:

All servers and compute nodes in all the clusters have local disk space (/scratch).

  • This scratch space is available to batch jobs running on the clusters.
  • Data on local scratch may be subject to removal as soon as the job completes.
  • There is NO backup of data in /scratch.
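A common pattern for using node-local /scratch from a batch job is to stage input data onto the local disk, compute against the fast local copy, and clean up before the job exits (since the data may be removed as soon as the job completes anyway). The helper below is a hypothetical sketch of that pattern; `run_with_local_scratch` and its parameters are illustrative, not a CCR-provided utility.

```python
import os
import shutil
import tempfile

def run_with_local_scratch(input_path, work, scratch_base="/scratch"):
    """Stage `input_path` into node-local scratch, call `work` on the copy,
    and return its result.  The scratch copy is removed afterwards.

    `scratch_base` defaults to the /scratch mount described above, but any
    writable directory works for testing."""
    workdir = tempfile.mkdtemp(dir=scratch_base)
    try:
        local_copy = os.path.join(workdir, os.path.basename(input_path))
        shutil.copy2(input_path, local_copy)  # stage in
        return work(local_copy)               # compute against fast local disk
    finally:
        shutil.rmtree(workdir, ignore_errors=True)  # clean up before job exit
```

In a real batch script, `work` would be the computation itself, and any output files would be copied back to GPFS (home, project, or global scratch space) before the cleanup step, since nothing in /scratch is backed up.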