In a groundbreaking achievement poised to redefine our understanding of cosmic evolution and complex Earth systems, researchers led by Keiya Hirashima at the RIKEN Center for Interdisciplinary Theoretical and Mathematical Sciences (iTHEMS) in Japan, in collaboration with The University of Tokyo and Universitat de Barcelona, have unveiled the first Milky Way simulation capable of tracking over 100 billion individual stars through 10,000 years of galactic evolution. By integrating artificial intelligence (AI) with conventional numerical simulation techniques, the model follows 100 times more stars than previous state-of-the-art simulations while generating results more than 100 times faster. The work, presented at the international supercomputing conference SC ’25, has implications that extend far beyond astrophysics, heralding a new era for high-performance computing and AI-driven scientific discovery. The same methodology holds promise for similarly complex, multi-scale challenges in fields such as large-scale Earth system studies, including climate and weather research.

For decades, astrophysicists have harbored the ambitious goal of constructing Milky Way simulations with sufficient fidelity to follow the intricate trajectories and behaviors of each individual star. Such models are crucial for enabling researchers to rigorously test and refine theories concerning galactic evolution, structure, and the complex processes of star formation by directly comparing simulation outputs with vast observational datasets. However, the sheer scale and complexity of accurately modeling a galaxy like the Milky Way present formidable computational hurdles. These simulations necessitate the precise calculation of gravitational interactions, fluid dynamics, the nucleosynthesis of chemical elements, and the explosive violence of supernova events, all across immense ranges of both time and spatial scales. This intricate interplay of physical phenomena makes the task extraordinarily demanding, pushing the boundaries of current computational capabilities.

Historically, scientists have been unable to simulate a galaxy as vast as our own while simultaneously maintaining the granular detail required to resolve individual stellar behavior. Even the most advanced simulations available today typically represent systems with a total mass equivalent to approximately one billion suns, far short of the more than 100 billion stars that constitute the Milky Way. Consequently, the smallest discrete unit, or "particle," in these prior models often represented a collective of roughly 100 stars. This averaging inherently smoothed out the distinct behaviors of individual stars, limiting the accuracy of simulations at smaller scales and hindering the study of localized phenomena. A primary driver of this limitation is the fundamental trade-off between computational timestep and accuracy: to faithfully capture transient and rapid events like supernovae, simulations must advance in extremely small increments of time.

Reducing the simulation timestep, while essential for capturing ephemeral events, dramatically escalates the computational burden. Even with the most sophisticated physics-based models currently available, simulating the Milky Way at the resolution of individual stars would demand approximately 315 hours of computation for every 1 million years of simulated galactic evolution. At this prohibitive rate, generating just 1 billion years of galactic history would consume over 36 years of real-time computation. Simply scaling up by adding more supercomputer cores, a common approach to accelerating simulations, proves to be an impractical solution due to escalating energy consumption and diminishing efficiency as the number of cores increases.
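The arithmetic behind these figures is easy to verify from the two rates quoted above (315 hours of computation per 1 million simulated years, and 1 billion years being 1,000 times that span):

```python
# Cost of individual-star resolution with conventional physics-only models,
# using the figures quoted in the text: ~315 hours per 1 million simulated years.
hours_per_myr = 315
myr_per_gyr = 1_000                       # 1 billion years = 1,000 million years

total_hours = hours_per_myr * myr_per_gyr  # 315,000 hours for 1 billion years
total_years = total_hours / (24 * 365)     # convert wall-clock hours to years

print(f"{total_hours:,} hours ≈ {total_years:.1f} years of computation")
```

This reproduces the "over 36 years of real-time computation" figure in the text.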

To surmount these obstacles, Hirashima and his team devised a hybrid approach that integrates a deep learning surrogate model with established physical simulation techniques. The surrogate was trained on high-resolution supernova simulations, enabling it to learn and predict the complex behavior of gas dynamics in the 100,000 years following a supernova explosion without imposing additional computational demands on the primary simulation engine. This AI component acted as an accelerator, allowing the researchers to accurately capture the overarching galactic dynamics while simultaneously resolving the fine details of small-scale events, including the intricate processes within individual supernovae. The accuracy of the approach was validated through extensive comparisons with large-scale simulations conducted on RIKEN’s Fugaku supercomputer and The University of Tokyo’s Miyabi Supercomputer System.
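The article does not describe the team's actual code, but the control flow of such a "surrogate-in-the-loop" scheme can be sketched in toy form. Everything below is a hypothetical illustration: the real simulation evolves gravity, hydrodynamics, and chemistry for billions of particles, and the real surrogate is a deep neural network; these stubs only show where the surrogate slots in, replacing the many tiny timesteps a blast would otherwise require.

```python
class SupernovaSurrogate:
    """Stand-in for a deep-learning model trained on high-resolution
    supernova runs: given the gas state around a blast, return the state
    ~100,000 years later in a single prediction (hypothetical interface)."""
    def predict(self, gas_density):
        # Placeholder rule: the blast dilutes the local gas.
        return gas_density * 0.5

def physics_step(gas, dt):
    """Placeholder for the gravity + hydrodynamics update at the coarse
    timestep the global simulation can afford."""
    return [rho * (1.0 + 0.01 * dt) for rho in gas]

def detect_supernovae(gas, threshold=1.5):
    """Placeholder detection: cells above a density threshold 'explode'."""
    return [i for i, rho in enumerate(gas) if rho > threshold]

def advance(gas, dt, surrogate):
    gas = physics_step(gas, dt)
    # Instead of shrinking dt to resolve each blast, hand the affected
    # region to the surrogate and paste its prediction back in.
    for i in detect_supernovae(gas):
        gas[i] = surrogate.predict(gas[i])
    return gas

gas = [1.0, 1.6, 0.9, 2.0]  # toy one-dimensional "density field"
gas = advance(gas, dt=1.0, surrogate=SupernovaSurrogate())
print(gas)
```

The design point is that the surrogate call costs one inference, regardless of how violent the local event is, so the global timestep never has to shrink to supernova scales.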

The impact of this AI-augmented methodology is transformative: it delivers true individual-star resolution for galaxies containing upwards of 100 billion stars at dramatically higher speed. A simulation of 1 million years of galactic evolution, which previously would have required days of computation, was completed in just 2.78 hours. At that rate, 1 billion years of galactic history can be completed in approximately 115 days, down from the 36 years previously required. This speedup opens up unprecedented possibilities for exploring longer timescales and more complex scenarios in galactic evolution.
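The new figures check out the same way as the old ones, using only the 2.78 hours-per-million-years rate quoted above:

```python
# Cost with the AI-accelerated method: ~2.78 hours per 1 million simulated years.
hours_per_myr = 2.78
total_hours = hours_per_myr * 1_000   # 1 billion years = 1,000 million years
total_days = total_hours / 24         # wall-clock days for 1 billion years
speedup = 315 / 2.78                  # vs. the ~315 h/Myr rate quoted earlier

print(f"{total_days:.1f} days (~{speedup:.0f}x faster)")
```

This matches both the "approximately 115 days" figure and the "over 100 times faster" claim in the text.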

Beyond its profound implications for astrophysics, this hybrid AI approach possesses the potential to revolutionize numerous domains within computational science that grapple with the challenge of linking microscopic physical processes with macroscopic behaviors. Fields such as meteorology, oceanography, and climate modeling, which are inherently multi-scale and multi-physics in nature, stand to benefit immensely from the development of tools that can significantly accelerate complex simulations.

"I firmly believe that the integration of AI with high-performance computing represents a fundamental paradigm shift in how we approach multi-scale, multi-physics problems across the computational sciences," stated Hirashima. "This achievement demonstrably illustrates that AI-accelerated simulations can transcend mere pattern recognition to become a genuine engine for scientific discovery. It empowers us to meticulously trace the emergence of the very elements that underpin life itself within the intricate tapestry of our galaxy." This sentiment underscores the profound potential of AI not just to enhance computational power, but to fundamentally alter our capacity for scientific inquiry and discovery, enabling us to ask and answer questions previously considered intractable.