The work, presented at the international supercomputing conference SC ’25, marks a major advance for astrophysics, high-performance computing, and AI-assisted scientific modeling. The strategy developed by Hirashima’s team could also be adapted to large-scale Earth system studies, including research into climate and weather patterns.

The Intractable Challenge of Modeling Every Star

For decades, astrophysicists have aimed to build Milky Way simulations detailed enough to follow the trajectory and evolution of each individual star. Such models would allow researchers to compare theories of galactic evolution, structure, and star formation directly against observational data. Building an accurate galactic simulation, however, is computationally daunting: it requires calculating gravitational forces, fluid dynamics, the nucleosynthesis of chemical elements, and supernova explosions across vast spans of time and space.

Until now, scientists have been unable to simulate a galaxy as massive as the Milky Way while preserving detail at the level of individual stars. The most sophisticated simulations in operation today can represent systems with a total mass of roughly one billion suns, far short of the more than 100 billion stars in the Milky Way. As a result, the smallest "particle" in these models typically stands in for roughly 100 stars. This averaging smooths out the behavior of individual stars and limits the accuracy of small-scale astrophysical processes. The fundamental constraint is the time interval between computational steps: to capture brief, energetic events such as the rapid evolution of a supernova, a simulation must advance in extremely small time increments.
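To put the resolution gap in rough numbers, the back-of-the-envelope comparison below assumes a mean stellar mass of about one solar mass; the assumption is purely illustrative and not taken from the paper.

```python
# Illustrative resolution comparison (assumes ~1 solar mass per star on average).
MEAN_STELLAR_MASS = 1.0      # solar masses; assumed value, not from the paper
STARS_PER_PARTICLE = 100     # typical aggregation in current models
CURRENT_TOTAL_MASS = 1e9     # solar masses representable by today's best runs
MILKY_WAY_STARS = 1e11       # more than 100 billion stars in the Milky Way

particles_today = CURRENT_TOTAL_MASS / (STARS_PER_PARTICLE * MEAN_STELLAR_MASS)
particles_star_by_star = MILKY_WAY_STARS  # one particle per star at full resolution

print(f"particles in current models  : {particles_today:.0e}")         # ~1e+07
print(f"particles needed star by star: {particles_star_by_star:.0e}")  # ~1e+11
print(f"increase                     : {particles_star_by_star / particles_today:,.0f}x")
```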

Shrinking the timestep drives up the computational cost sharply. Even with the most advanced physics-based models available today, simulating the Milky Way star by star would require roughly 315 hours for every million years of galactic evolution. At that rate, producing just one billion years of galactic activity would take more than 36 years of real-time computation. Simply adding more supercomputer cores is not a practical solution: energy consumption becomes excessive, and the efficiency gained per additional core diminishes.
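The 36-year figure follows directly from the quoted rate; the short check below uses nothing but the numbers above (the constant names are chosen here for clarity, not taken from the paper).

```python
# Sanity check: 315 wall-clock hours per simulated million years.
HOURS_PER_MYR = 315.0         # quoted cost with conventional physics-only models
MYR_PER_GYR = 1_000           # one billion years = 1,000 million years
HOURS_PER_YEAR = 24 * 365.25  # wall-clock hours in a calendar year

total_hours = HOURS_PER_MYR * MYR_PER_GYR   # 315,000 hours for one Gyr
total_years = total_hours / HOURS_PER_YEAR  # roughly 36 years of real time
print(f"{total_hours:,.0f} hours ≈ {total_years:.0f} years")
```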

A Novel Deep Learning Approach Revolutionizes Simulation

To overcome these barriers, Hirashima and his team devised a method that combines a deep learning surrogate model with conventional physical simulation techniques. The surrogate was trained on high-resolution supernova simulations and learned to predict how gas disperses over the 100,000 years following a supernova explosion, without adding computational load to the main simulation. This AI component allowed the researchers to capture the galaxy’s overall behavior while still resolving small-scale events, including individual supernovae. The team validated the approach by comparing its outputs against large-scale simulation runs on RIKEN’s Fugaku supercomputer and The University of Tokyo’s Miyabi Supercomputer System.
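The team’s implementation is not reproduced here, but the general shape of such a hybrid loop can be sketched as follows. All names (the galaxy methods, SupernovaSurrogate, predict_gas_dispersion) are placeholders invented for illustration and do not describe the actual code.

```python
# Minimal sketch of a hybrid physics + surrogate loop (hypothetical names;
# an illustration of the idea, not the team's implementation).

class SupernovaSurrogate:
    """Deep learning model trained on high-resolution supernova simulations."""

    def predict_gas_dispersion(self, local_gas_state):
        # Return the surrounding gas distribution ~100,000 years after the
        # explosion, replacing many tiny physics timesteps with one prediction.
        ...

def run_hybrid_step(galaxy, surrogate, dt):
    # 1. Advance gravity and hydrodynamics with the conventional solver.
    galaxy.advance_physics(dt)

    # 2. Wherever a star has just exploded, hand the local region to the
    #    surrogate instead of resolving the blast at very small timesteps.
    for event in galaxy.new_supernovae():
        dispersed_gas = surrogate.predict_gas_dispersion(event.local_gas_state)
        galaxy.apply_feedback(event.location, dispersed_gas)
```

Because the surrogate answers the expensive small-timestep question in a single evaluation, the main loop can keep the larger timestep appropriate for galaxy-scale dynamics.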

The new method delivers true individual-star resolution for galaxies containing more than 100 billion stars, and it does so quickly: simulating one million years of galactic evolution now takes about 2.78 hours instead of 315. At that pace, one billion years of galactic activity can be simulated in roughly 115 days rather than the 36 years previously estimated.
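Applying the same conversion as before to the new rate reproduces the quoted figure and gives the overall speedup (again just a sanity check on the numbers in the text).

```python
# Sanity check: 2.78 wall-clock hours per simulated million years.
HOURS_PER_MYR_HYBRID = 2.78
HOURS_PER_MYR_BASELINE = 315.0
MYR_PER_GYR = 1_000

hours_per_gyr = HOURS_PER_MYR_HYBRID * MYR_PER_GYR        # 2,780 hours
days_per_gyr = hours_per_gyr / 24                         # ~115.8 days
speedup = HOURS_PER_MYR_BASELINE / HOURS_PER_MYR_HYBRID   # ~113x faster
print(f"{days_per_gyr:.1f} days per simulated Gyr, ~{speedup:.0f}x speedup")
```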

Unlocking Broader Potential for Climate, Weather, and Ocean Modeling

The implications of this hybrid AI approach extend well beyond astrophysics. Many areas of computational science must link small-scale physical phenomena to large-scale behavior, and fields such as meteorology, oceanography, and climate modeling face similar challenges; they stand to benefit from tools that substantially accelerate complex, multi-scale simulations.

"I believe that the integration of AI with high-performance computing represents a fundamental paradigm shift in how we approach and tackle multi-scale, multi-physics problems across the entire spectrum of computational sciences," states Hirashima. He further emphasizes, "This achievement also unequivocally demonstrates that AI-accelerated simulations can transcend mere pattern recognition to become an indispensable tool for genuine scientific discovery—empowering us to meticulously trace the very origins of the elements that underpin life itself, as they emerged and evolved within our galaxy." The ability to model such vast numbers of stars with such unprecedented speed and detail opens up new avenues for understanding the chemical enrichment of the galaxy and, by extension, the conditions necessary for life.