Launched in April 2024 by Governor Kathy Hochul, Empire AI is a bold partnership of New York’s leading public and private universities, formed to establish a state-of-the-art artificial intelligence computing center housed at SUNY’s University at Buffalo. Empire AI is already facilitating statewide innovation, research, and development of AI technologies.
AI-enhanced medical imaging
A multidisciplinary team will employ Empire AI’s computing power to create an AI tool that helps doctors better diagnose and monitor diseases. The tool will generate biomedical images by pairing pathology reports with image-generating AI models trained on medical scans. This work could lead to more sophisticated models that integrate genetic data with medical imaging.
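To make the underlying idea concrete, here is a minimal, hypothetical sketch of text-conditioned image synthesis in PyTorch: a generator that takes an embedding of a pathology report plus random noise and produces a scan-like image. The architecture, the sizes, and the name `ReportConditionedGenerator` are illustrative assumptions, not the team's actual model.

```python
import torch
import torch.nn as nn

class ReportConditionedGenerator(nn.Module):
    """Toy text-conditioned image generator (illustrative only)."""

    def __init__(self, text_dim=256, noise_dim=64):
        super().__init__()
        # Project the report embedding and a noise vector into a shared code.
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + noise_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 128 * 8 * 8),
        )
        # Upsample the fused code into a single-channel "scan-like" image.
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),    # 32 -> 64
            nn.Tanh(),
        )

    def forward(self, report_embedding, noise):
        code = self.fuse(torch.cat([report_embedding, noise], dim=-1))
        return self.decode(code.view(-1, 128, 8, 8))

# Usage with placeholder inputs: a real system would embed the pathology
# report with a language model and train against paired medical scans.
gen = ReportConditionedGenerator()
report_vec = torch.randn(2, 256)   # stand-in for report embeddings
z = torch.randn(2, 64)
fake_scans = gen(report_vec, z)
print(fake_scans.shape)            # torch.Size([2, 1, 64, 64])
```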
Using AI to Improve the Speed and Accuracy of Disease Detection from Tissue Samples
This project aims to develop a powerful AI foundation model designed specifically for analyzing histopathology images—microscopic views of tissue samples used to diagnose diseases such as cancer, autoimmune disorders, and infections. Unlike general medical imaging, histopathology requires capturing highly detailed cellular structures and subtle variations that are critical for precise diagnosis. By training on large, diverse datasets and leveraging high‑performance GPU clusters, the model will enable applications ranging from disease classification and automated reporting to advanced image search and retrieval. In partnership with the University at Buffalo’s Pathology Department, this work seeks to advance early detection, improve diagnostic accuracy, and create scalable AI tools for both clinical and research uses.
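The image search and retrieval application can be illustrated with a small, hedged example: encode tissue tiles into embedding vectors and rank archived tiles by cosine similarity to a query. The untrained ResNet-18 below is only a stand-in for a histopathology foundation model, and all data is placeholder.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Stand-in encoder: a real histopathology foundation model would be a large
# network pretrained on millions of tissue tiles; untrained ResNet-18 is used
# here only so the example runs anywhere.
encoder = resnet18(weights=None)
encoder.fc = torch.nn.Identity()   # keep the 512-d feature vector
encoder.eval()

@torch.no_grad()
def embed(tiles):
    """Map a batch of RGB tissue tiles to unit-norm embedding vectors."""
    return F.normalize(encoder(tiles), dim=-1)

# Placeholder data: 32 archived tiles and one query tile (3x224x224 each).
archive_tiles = torch.rand(32, 3, 224, 224)
query_tile = torch.rand(1, 3, 224, 224)

archive_emb = embed(archive_tiles)
query_emb = embed(query_tile)

# Cosine similarity reduces image search to a matrix product on unit vectors.
scores = archive_emb @ query_emb.T          # (32, 1)
top5 = scores.squeeze(1).topk(5).indices
print("Most similar archived tiles:", top5.tolist())
```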
Combining medical imaging with computational physics to deliver fast, personalized insights into neurovascular health
The Measurement and Physics-Driven Generative Models (MPDGM) project combines advanced AI with physics-based modeling to transform standard brain blood flow images into detailed 3D simulations. By integrating medical imaging and computational physics, the research seeks to capture more nuanced patterns in how blood moves through neurovascular systems. Leveraging UB’s Empire AI supercomputing, this approach has the potential to generate faster, more individualized insights that could one day support physicians in understanding complex brain vessel conditions.
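A generic way to blend measurements with physics in a model of this kind (offered only as an assumption about the broad approach, not the MPDGM formulation itself) is to fit a network to imaging-derived velocities while penalizing violations of a conservation law, such as the incompressibility condition div(u) = 0 for blood flow. The `FlowNet` sketch below is entirely illustrative.

```python
import torch
import torch.nn as nn

class FlowNet(nn.Module):
    """Toy coordinate network: (x, y, z) -> predicted velocity (u, v, w)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, 3),
        )

    def forward(self, xyz):
        return self.net(xyz)

def divergence(model, xyz):
    """Compute div(u) = du/dx + dv/dy + dw/dz with autograd."""
    xyz = xyz.requires_grad_(True)
    vel = model(xyz)
    div = 0.0
    for i in range(3):
        grad_i = torch.autograd.grad(vel[:, i].sum(), xyz, create_graph=True)[0]
        div = div + grad_i[:, i]
    return div

model = FlowNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder "measurements": velocities sampled from imaging at some points.
pts = torch.rand(256, 3)
measured = torch.randn(256, 3) * 0.1

for step in range(200):
    opt.zero_grad()
    data_loss = ((model(pts) - measured) ** 2).mean()                 # fit the imaging data
    phys_loss = (divergence(model, torch.rand(256, 3)) ** 2).mean()   # mass conservation
    loss = data_loss + 0.1 * phys_loss
    loss.backward()
    opt.step()
```

The relative weight on the physics term is the key design choice: it decides how strongly the simulation is pulled toward physically plausible flow when the imaging data is sparse or noisy.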
Enhancing Human-AI Interaction
A UB research team spanning the Department of Computer Science and Engineering and the Department of Communicative Disorders and Sciences focuses on an area of AI research known as personalization of large language models: training models to learn users’ preferences and behaviors so that their experience with chatbots can be tailored to them. The project will leverage Empire AI to develop tools that ensure people with ALS (also known as Lou Gehrig’s disease), cerebral palsy and other motor disorders have equal access to AI. Researchers plan to enrich augmentative and alternative communication (AAC) devices with conversational AI.
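As a toy illustration of personalization (far simpler than fine-tuning a large language model, and not the team's method), the sketch below blends a generic next-word predictor with a model built from one user's own phrase history, so that an AAC interface would surface the words that particular user actually tends to say. All names and data are made up.

```python
from collections import Counter

GENERIC_NEXT = {                      # stand-in for a general-purpose language model
    "i": Counter({"am": 50, "want": 30, "need": 20}),
    "need": Counter({"help": 40, "water": 10, "rest": 5}),
}

user_history = [                      # stand-in for the user's past messages
    "i need my wheelchair",
    "i need my communication device",
    "i need rest",
]

# Build a per-user bigram model from the history.
user_next = {}
for sentence in user_history:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        user_next.setdefault(prev, Counter())[nxt] += 1

def suggest(prev_word, k=3, personal_weight=0.7):
    """Rank candidate next words by an interpolated generic + personal score."""
    generic = GENERIC_NEXT.get(prev_word, Counter())
    personal = user_next.get(prev_word, Counter())
    g_total = sum(generic.values()) or 1
    p_total = sum(personal.values()) or 1
    candidates = set(generic) | set(personal)
    scored = {
        w: (1 - personal_weight) * generic[w] / g_total
           + personal_weight * personal[w] / p_total
        for w in candidates
    }
    return sorted(scored, key=scored.get, reverse=True)[:k]

print(suggest("need"))   # the user's own frequent word "my" ranks ahead of generic picks
```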
AI to study protein structures for drug design
A team of scientists is developing an AI model (SWAXSFold) to improve AlphaFold, the celebrated AI program that predicts protein structures and could lead to the development of new drugs to treat diseases. AlphaFold struggles to account for how proteins change shape in response to pH, temperature and other conditions. Empire AI’s computing capacity enables SWAXSFold to address this limitation by using experimental data to predict the shapes proteins adopt under these conditions.
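The general technique of reconciling predicted structures with scattering experiments can be sketched as follows: compare a measured small- and wide-angle X-ray scattering (SWAXS) intensity curve I(q) with curves computed from candidate conformations, and keep the conformation with the lowest chi-square score after fitting a scale factor. This NumPy example uses fabricated data and is not the SWAXSFold algorithm itself.

```python
import numpy as np

def chi_square(i_calc, i_exp, sigma):
    """Chi-square agreement between a computed and a measured SWAXS profile,
    after fitting the single scale factor c that best matches them."""
    c = np.sum(i_calc * i_exp / sigma**2) / np.sum(i_calc**2 / sigma**2)
    return np.mean(((c * i_calc - i_exp) / sigma) ** 2)

# Placeholder data: measured intensities on a q-grid, plus computed profiles
# for three candidate conformations (e.g., the same protein at different pH).
q = np.linspace(0.01, 0.5, 100)
i_exp = np.exp(-30 * q**2) + 0.01 * np.random.rand(100)   # fake experiment
sigma = np.full(100, 0.01)                                 # fake uncertainties

candidates = {
    "open":   np.exp(-20 * q**2),
    "closed": np.exp(-30 * q**2),
    "bent":   np.exp(-40 * q**2),
}

scores = {name: chi_square(prof, i_exp, sigma) for name, prof in candidates.items()}
best = min(scores, key=scores.get)
print(f"Best-fitting conformation: {best}")   # likely "closed" for this fake data
```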
AI-Powered Polypharmacology
UB researchers in the Department of Biomedical Informatics are creating a model (CANDO) that uses AI to analyze billions of interactions between molecules, and its development will be greatly aided by Empire AI. The model’s primary goal is to help identify new medicines to treat diseases; other potential applications include new research probes, reagents, detergents, pesticides, sensors and more.
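A common idea in proteome-scale platforms of this kind is to describe each compound by its vector of predicted interaction scores across many proteins, then rank candidate compounds by how closely their signatures match those of drugs already known to work. The tiny NumPy sketch below shows only that signature-similarity idea with made-up numbers; it is not CANDO's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder interaction matrix: rows are compounds, columns are proteins,
# entries are predicted interaction scores. A real platform works at a vastly
# larger scale; this 6 x 8 example only illustrates the signature idea.
compounds = ["drug_A", "drug_B", "cand_1", "cand_2", "cand_3", "cand_4"]
scores = rng.random((6, 8))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Pretend drug_A and drug_B are known treatments for some disease, and ask
# which candidate compound has the most similar proteome-wide signature.
known = scores[:2].mean(axis=0)            # average signature of the known drugs
ranking = sorted(range(2, 6), key=lambda i: cosine(scores[i], known), reverse=True)
for i in ranking:
    print(compounds[i], round(cosine(scores[i], known), 3))
```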
Advancing surgical precision and trauma care through AI-enhanced multimodal imaging and data analysis
This research develops advanced AI tools that combine visible and near-infrared imaging to assist surgeons during gastrointestinal procedures. By fusing these multimodal images, the system is designed to support identification of critical anatomy and sterile surgical instruments. Additionally, the AI analyzes electronic medical records and audio communication to potentially inform trauma triage decisions. Together, these technologies aim to advance research into surgical and trauma care applications.
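As a rough, assumed illustration of multimodal fusion (not the project's actual system), the snippet below overlays a co-registered near-infrared (NIR) channel onto a visible-light frame by highlighting pixels where the NIR signal is strong.

```python
import numpy as np

# Toy fusion of a visible-light frame with a co-registered NIR frame: regions
# with strong NIR signal (e.g., a fluorescent dye) are tinted green on top of
# the visible image. Purely illustrative; a real surgical system also handles
# registration, calibration, and timing.
h, w = 240, 320
visible = np.random.rand(h, w, 3)            # stand-in RGB endoscope frame
nir = np.zeros((h, w))
nir[100:140, 150:200] = 1.0                  # stand-in fluorescence hotspot

def fuse(visible_rgb, nir_gray, alpha=0.6, threshold=0.5):
    """Alpha-blend a green overlay wherever the NIR signal exceeds a threshold."""
    overlay = visible_rgb.copy()
    mask = nir_gray > threshold
    green = np.array([0.0, 1.0, 0.0])
    overlay[mask] = (1 - alpha) * overlay[mask] + alpha * green
    return overlay

fused = fuse(visible, nir)
print(fused.shape, fused[120, 175])          # the hotspot pixel is shifted toward green
```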