Researchers led by Keiya Hirashima at the RIKEN Center for Interdisciplinary Theoretical and Mathematical Sciences (iTHEMS) in Japan, working with The University of Tokyo and Universitat de Barcelona, have created the first simulation of the Milky Way capable of tracking more than 100 billion individual stars over 10,000 years of evolution. By integrating artificial intelligence (AI) with state-of-the-art numerical simulation techniques, the model achieves 100 times the stellar resolution of prior simulations and was generated more than 100 times faster. The work, presented at the SC ’25 international supercomputing conference, marks a major advance for astrophysics, high-performance computing, and AI-assisted scientific modeling, and its methodology holds promise for large-scale Earth system studies, including climate and weather prediction.

For decades, astrophysicists have aspired to build simulations of our home galaxy detailed enough to track the trajectory and evolution of every individual star. Such granular models are essential for testing theories of galactic formation, structure, and star formation directly against the observational data astronomers gather. Simulating a galaxy as vast and dynamic as the Milky Way, however, poses an enormous computational challenge: it requires simultaneously calculating gravitational interactions, the fluid dynamics of interstellar gas, the synthesis of chemical elements within stars, and supernova explosions, across colossal ranges of both time and spatial scales.

Until now, simulating an entire galaxy like the Milky Way at the resolution of individual stars has remained out of reach. Even the most sophisticated simulations at the forefront of astrophysical research are typically limited to systems with a total mass of roughly one billion suns, far short of the estimated 100 billion or more stars in the Milky Way. As a result, the smallest discrete unit, or "particle," in these models often represents a group of approximately 100 stars. This averaging smooths over the behavior of individual stars, limiting how accurately small-scale astrophysical processes can be understood and modeled. Temporal resolution poses a further hurdle: to faithfully capture rapid, crucial events such as the life cycle and explosive demise of a supernova, simulations must advance in extremely small time increments.

Reducing the simulation’s timestep, while essential for capturing such fleeting phenomena, dramatically escalates the computational demand. Even with the most advanced physics-based models available today, simulating the Milky Way star by star is prohibitively expensive: one million years of galactic evolution is estimated to require approximately 315 hours of processing time, so one billion years would take more than 36 years of continuous computation. The intuitive fix of simply adding more supercomputer cores quickly runs into severe limits: energy consumption becomes excessive, and efficiency degrades due to communication overheads and other scaling issues, making brute force impractical at the desired level of detail.
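The 36-year figure follows directly from the quoted per-megayear cost. A quick back-of-the-envelope check:

```python
# Sanity check of the figures quoted above.
hours_per_myr = 315            # reported cost: ~315 hours per 1 Myr simulated
myr_per_gyr = 1_000            # 1 billion years = 1,000 million years
total_hours = hours_per_myr * myr_per_gyr
years = total_hours / (24 * 365)
print(round(years, 1))  # → 36.0, i.e. over 36 years of continuous computation
```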

To surmount these barriers, Hirashima and his team blended a deep learning surrogate model with conventional, physics-based simulations. The surrogate was trained on a dataset derived from high-resolution supernova simulations, learning to predict how gas disperses and interacts during the 100,000 years following a supernova event, without imposing additional computational burden on the primary simulation. This integration allowed the researchers to capture the overarching behavior of the galaxy while still modeling small-scale phenomena, including the detailed physics of individual supernovae. To validate the approach, the team compared the simulation’s outputs against results from large-scale runs on RIKEN’s Fugaku supercomputer and The University of Tokyo’s Miyabi Supercomputer System.
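The team's actual network architecture and training pipeline are not described here. As a purely illustrative sketch of the surrogate idea, the pattern is: pair inputs describing local gas conditions around a supernova with outputs summarizing the gas state long afterward (taken from expensive high-resolution runs), then fit a fast model to that mapping. The toy MLP and synthetic data below are hypothetical stand-ins:

```python
import numpy as np

# Illustrative sketch only (not the team's code): a tiny MLP surrogate that
# maps local pre-supernova gas features to summary quantities of the gas
# state ~100,000 years after the explosion. In practice the training pairs
# would come from high-resolution supernova simulations.
rng = np.random.default_rng(0)

# Hypothetical training set: 8 input features per event, 4 output quantities.
X = rng.normal(size=(256, 8))
Y = X @ rng.normal(size=(8, 4)) * 0.1 + rng.normal(scale=0.01, size=(256, 4))

# One hidden layer, trained with plain full-batch gradient descent.
W1 = rng.normal(scale=0.1, size=(8, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 4)); b2 = np.zeros(4)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
for step in range(500):
    h, pred = forward(X)
    err = pred - Y                          # gradient of mean-squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)          # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Once trained, the surrogate replaces the expensive fine-timestep supernova
# calculation with a single cheap forward pass.
_, prediction = forward(X[:1])
print(prediction.shape)  # (1, 4)
```

The payoff is that the main simulation no longer needs to shrink its timestep around every supernova; it hands the event to the surrogate and continues at its coarser stride.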

The resulting hybrid method delivers true individual-star resolution for galaxies containing more than 100 billion stars, and it does so with unprecedented speed. Simulating one million years of galactic evolution now takes just 2.78 hours. At that rate, one billion years of galactic history, a task previously estimated to take 36 years, can be completed in approximately 115 days, a reduction of more than 99%.
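The speedup can be verified from the two quoted per-megayear costs:

```python
# Comparing the hybrid method's cost to the previous physics-only estimate.
hours_per_myr = 2.78                   # hybrid method: 2.78 h per 1 Myr
total_hours = hours_per_myr * 1_000    # 1 Gyr = 1,000 Myr
days = total_hours / 24                # hybrid: ~115.8 days
old_days = 315 * 1_000 / 24            # previous estimate: ~13,125 days (~36 yr)
print(round(days, 1), round(1 - days / old_days, 3))  # → 115.8 0.991
```

The ratio of the two costs also recovers the headline "over 100 times faster" claim (315 / 2.78 ≈ 113).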

The implications of this hybrid AI approach extend well beyond astrophysics, promising to reshape many domains of computational science that must link microscopic physical processes with macroscopic behavior. Fields such as meteorology, oceanography, and climate modeling, all inherently multi-scale and multi-physics, stand to benefit immensely from tools that dramatically accelerate their simulations.

"I firmly believe that the seamless integration of artificial intelligence with high-performance computing marks a fundamental paradigm shift in how we approach and conquer multi-scale, multi-physics problems across the entire spectrum of computational sciences," asserts Hirashima. "This significant achievement not only demonstrates that AI-accelerated simulations can transcend their origins in pattern recognition to become genuine engines of scientific discovery but also highlights their capacity to help us unravel the cosmic origins of the very elements that paved the way for life itself within our galaxy."