As experimental and observational facilities generate data at unprecedented scale, accelerating new discoveries increasingly depends on the close integration of scientific instruments, supercomputers, AI, and data resources.
Building on years of work in this area, ALCF continues to develop and demonstrate capabilities and services that connect large-scale scientific facilities and high-performance computing (HPC) resources, helping to advance DOE’s Genesis Mission and its American Science Cloud (AmSC) platform, as well as DOE’s Integrated Research Infrastructure (IRI) program.
As part of this evolution beyond traditional batch-based access to supercomputers, ALCF is pioneering a service-enabled approach in which computing, data analysis and sharing, and AI model training and inference capabilities function as integrated services that support end-to-end scientific workflows.
At the core of this transformation is the long-standing partnership between ALCF and the Advanced Photon Source (APS). Following its recent upgrade, APS now delivers x-ray beams up to 500 times brighter than before, producing dramatically higher data volumes. To keep pace, Argonne teams have developed automated pipelines that stream beamline data to ALCF supercomputers for immediate processing and return results to researchers during live experiments.
This capability is already reshaping x-ray science. At the APS X-ray Photon Correlation Spectroscopy beamline, for example, data generated during experiments now trigger automated analysis on ALCF systems, with results returned in minutes. Scientists can refine hypotheses, adjust parameters, and even synthesize new samples while an experiment is still in progress. Instead of waiting for post-experiment analysis, they benefit from an interactive feedback loop that streamlines discovery.
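The shape of that feedback loop can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: the real XPCS pipeline streams detector data through Globus services to ALCF systems, whereas this code simply shows the pattern of new data triggering analysis whose result steers the next acquisition.

```python
"""Conceptual sketch of an experiment-time feedback loop.

All names (Frame, analyze, adjust_parameters) are invented for
illustration; they are not the actual APS/ALCF pipeline APIs.
"""
from dataclasses import dataclass
from queue import Queue


@dataclass
class Frame:
    """A stand-in for one detector acquisition."""
    scan_id: int
    counts: list


def analyze(frame: Frame) -> float:
    # Stand-in for HPC-scale correlation analysis; here, mean intensity.
    return sum(frame.counts) / len(frame.counts)


def adjust_parameters(result: float, threshold: float = 100.0) -> str:
    # The feedback step: steer the next acquisition based on live results.
    return "increase_exposure" if result < threshold else "keep_settings"


def run_feedback_loop(frames: Queue) -> list:
    """Drain queued frames, analyzing each and returning a decision."""
    decisions = []
    while not frames.empty():
        frame = frames.get()                         # new data triggers analysis
        result = analyze(frame)                      # would run on ALCF systems
        decisions.append(adjust_parameters(result))  # returned mid-experiment
    return decisions


q = Queue()
q.put(Frame(1, [50, 60, 70]))     # dim frame: loop suggests adjusting
q.put(Frame(2, [150, 160, 170]))  # bright frame: loop suggests keeping settings
decisions = run_feedback_loop(q)
```

The essential point is that the decision step runs while the experiment is live, so its output can change what the instrument does next.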
From Infrastructure to Services
Delivering experiment-time computing required a fundamental shift in how computing facilities operate. Traditionally, supercomputer access is built around individual user accounts and queued batch jobs, which are not aligned with the needs of time-sensitive experimental workflows. To address this, ALCF developed and deployed new operational capabilities that support its move toward service-enabled science:
- Service accounts provide secure, experiment-based credentials
- On-demand queues reserve dedicated supercomputing capacity for time-critical workloads
- Globus tools and services automate data movement, computation, and overall workflow orchestration
- Large-scale data resources, including the Eagle system, provide secure storage, sharing, and access controls for users
- ALCF AI Inference Service enables researchers to integrate AI-driven analysis directly into their workflows
Together, these capabilities allow scientists to move data seamlessly from instruments to HPC systems, run large-scale simulations or analyses, apply AI models, and share results across institutions without needing to manually submit jobs or manage the infrastructure. With this approach, computing becomes a dependable background service that supports discovery rather than a separate technical hurdle.
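The end-to-end chain described above can be sketched as a sequence of service calls. This is a minimal local simulation with invented function names and paths; in practice each step would be a managed call to the corresponding service (Globus transfer, an on-demand queue, the ALCF AI Inference Service, Eagle storage) under service-account credentials.

```python
"""Conceptual sketch of a service-chained workflow.

Function names, paths, and the decision rule are hypothetical
stand-ins, not the actual ALCF or Globus APIs.
"""


def transfer_to_hpc(dataset: dict) -> dict:
    # Stand-in for Globus-managed data movement to an ALCF filesystem.
    return {**dataset, "location": "alcf:/eagle/project/raw"}


def run_analysis(dataset: dict) -> dict:
    # Stand-in for an on-demand-queue job on an ALCF system.
    return {**dataset, "result": len(dataset["frames"])}


def run_inference(dataset: dict) -> dict:
    # Stand-in for a call to the ALCF AI Inference Service.
    label = "good_scan" if dataset["result"] > 2 else "rescan"
    return {**dataset, "label": label}


def publish(dataset: dict) -> dict:
    # Stand-in for sharing results via access-controlled storage.
    return {**dataset, "shared": True}


PIPELINE = (transfer_to_hpc, run_analysis, run_inference, publish)


def orchestrate(dataset: dict) -> dict:
    # A workflow engine would add retries, credentials, and monitoring;
    # here we simply chain the steps so no manual job submission is needed.
    for step in PIPELINE:
        dataset = step(dataset)
    return dataset


outcome = orchestrate({"frames": ["f1", "f2", "f3"]})
```

The researcher interacts only with the first input and the final output; everything in between runs as a background service, which is the "dependable background service" idea in miniature.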
This framework extends beyond APS. At the DIII-D National Fusion Facility, for example, Globus-enabled workflows move data to supercomputers at ALCF or NERSC, where plasma reconstructions and particle trajectory simulations are performed and returned in near-real time between experimental pulses. In cosmology, the OpenCosmo portal provides web-based access to massive simulation datasets and HPC-scale analyses, allowing users to run queries and visualize results without managing data transfers or system-specific complexities. Across light source facilities, standardized ptychography workflows are enabling automated, multi-site data processing.
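Multi-facility resilience of the kind used in the DIII-D workflows reduces to a simple routing pattern: submit to a preferred site, fall back to another if it is unavailable. The sketch below invents the site list and availability flags for illustration; real routing is handled by workflow services, not application code like this.

```python
"""Sketch of multi-facility failover (sites and availability are
invented for illustration, not live facility status)."""


def submit_reconstruction(pulse_data: list, sites: list) -> str:
    # Try each computing site in order and fall back on failure, so
    # the analysis still returns between experimental pulses.
    for site in sites:
        if site["available"]:
            return f"reconstructed at {site['name']}"
    raise RuntimeError("no computing site available")


sites = [
    {"name": "ALCF", "available": False},   # e.g., down for maintenance
    {"name": "NERSC", "available": True},
]
where = submit_reconstruction([0.1, 0.2], sites)
```

Because the workflow, not the scientist, makes this choice, an outage at one facility does not stall the experiment.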
Collectively, these efforts demonstrate the building blocks of an integrated research infrastructure: interoperable workflows, secure automation, multi-facility resilience, and on-demand access to HPC and AI resources.
Building a Community Around Services
These capabilities are being coordinated through ALCF’s Service-Enabled Science program, which provides access to HPC, AI, and data services that can be embedded directly into scientific workflows.
To help the research community apply these capabilities, ALCF launched a Service-Enabled Science training series that introduces the facility’s growing portfolio of integrated services and demonstrates how they can be used in practice.
The series guides experimentalists, computational scientists, and data-intensive research teams through methods for experiment-time computing, workflow automation, large-scale data management, and collaborative analysis. It kicked off in 2025 with a session on enabling experiment-time computing at the APS; future sessions will address additional service-enabled platforms and tools, including OpenCosmo.
Through webinars, hands-on demonstrations, and real-world success stories, the series is cultivating a community of researchers who can take full advantage of service-enabled science, accelerating adoption and enabling teams across DOE facilities to use HPC, AI, and data resources as integrated services in their own research.
Paving the Way for the American Science Cloud
Argonne’s pioneering work in this space directly supports the American Science Cloud, part of DOE’s Genesis Mission, a national AI initiative to build a powerful scientific platform for accelerating discovery, strengthening national security, and driving energy innovation. AmSC seeks to create a more connected scientific ecosystem in which computing, data, and instruments operate together.
The service-oriented models developed through this work, including experiment-based access controls, automated workflow orchestration, AI-ready infrastructure, and integrated data services, provide a practical foundation for this next phase. By demonstrating secure, scalable mechanisms for linking experimental facilities with leadership computing systems, Argonne is laying the groundwork for that ecosystem, in which computing, data, and instruments work together seamlessly to accelerate discovery.