Published October 15, 2025

XDMoD: 15 Years of Measuring Computing Performance Across NSF’s XSEDE and ACCESS Programs

Time saved through efficiency helps researchers make the most of their resources. High-performance computing (HPC) administrators work to ensure the systems they support are operating efficiently, researchers want to maximize their allocations and job throughput, and the National Science Foundation expects both providers and users to optimize resource use to fully realize the value of its investment in cyberinfrastructure. 

That’s where XDMoD (XD Metrics on Demand) comes in: a widely used tool that offers detailed metrics on how supercomputing resources are being used. It supports HPC centers, the NSF and researchers – particularly those using ACCESS – by providing insights that can lead to improved job performance and more efficient resource utilization. 

XDMoD was born out of a desire to have near-real-time utilization metrics on HPC system use. In 2006, Tom Furlani, who was then Director of the University at Buffalo Center for Computational Research, a flagship institution within the State University of New York (SUNY) system, was frustrated by his inability to easily determine the utilization of the center’s HPC resources. He partnered with one of the center's programmers, Andrew Bruno, who developed UBMoD (UB Metrics on Demand), the precursor to XDMoD.
 
Years later, UBMoD's creators responded to a competitive NSF solicitation for the development of a framework to monitor and measure its HPC resources, won the bid and got to work. UBMoD formed the basis for today's ACCESS XDMoD, which also has an open-source version – Open XDMoD – that is widely deployed throughout the U.S.

Since that time, XDMoD has grown in scope and purpose, with many new features added over the years. Tom Furlani, PI for ACCESS Metrics, answers some questions about how XDMoD has evolved and what the team plans to do next:

Read the full article