Researchers led by Keiya Hirashima at the RIKEN Center for Interdisciplinary Theoretical and Mathematical Sciences (iTHEMS) in Japan, working with collaborators at The University of Tokyo and the Universitat de Barcelona in Spain, have unveiled the first simulation of the Milky Way that tracks more than 100 billion individual stars over a span of 10,000 years. The advance was made possible by combining artificial intelligence (AI) with conventional numerical simulation. The resulting model represents 100 times more stars than the most advanced previous simulations, while running more than 100 times faster than conventional approaches. The work, presented at the international supercomputing conference SC ’25, matters not only for astrophysics but also for high-performance computing and AI-driven scientific modeling: the team's strategy could be applied to other complex, large-scale scientific problems, including Earth system studies such as climate and weather research.

For decades, astrophysicists have aspired to build Milky Way simulations detailed enough to follow the trajectory and evolution of every single star in our galaxy. Such models would give theorists an empirical testbed, enabling direct, rigorous comparison between theories of galactic evolution, structure, and star formation and the observational data gathered by astronomers. But simulating a galaxy as large and dynamic as the Milky Way is extraordinarily demanding: it requires calculating gravitational forces, the fluid dynamics of interstellar gas and dust, the nucleosynthesis that creates chemical elements, and supernova explosions, all across immense scales of time and space.

Until now, scientists have been unable to simulate a galaxy of the Milky Way's size while preserving the fine-grained detail needed to track individual stars. The most sophisticated existing simulations can represent systems with a total mass of about one billion suns, far short of the more than 100 billion stars in the Milky Way. As a result, the smallest "particle" in these models typically stands in for roughly 100 stars. This averaging smooths out the distinct behavior of individual stars, limiting how accurately small-scale processes, which are often crucial to galactic evolution, can be modeled. The core obstacle is the simulation's time step: to faithfully capture rapid, transient events such as a supernova explosion, a simulation must advance in very small increments of time.
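To put the resolution gap in rough numbers, the following back-of-the-envelope calculation compares particle counts. The assumption that an average star weighs about one solar mass is ours, introduced purely for illustration; it is not stated in the article.

```python
# Back-of-the-envelope resolution comparison (illustrative only).
# Assumption (ours, not from the article): an average star ~ 1 solar mass.

prior_sim_total_mass = 1e9    # solar masses representable in prior state-of-the-art runs
stars_per_particle = 100      # each computational particle lumps ~100 stars together
milky_way_stars = 1e11        # stars the new simulation resolves individually

prior_particles = prior_sim_total_mass / stars_per_particle  # ~1e7 particles
new_particles = milky_way_stars                              # one particle per star

print(f"prior runs: ~{prior_particles:.0e} particles")
print(f"star-by-star Milky Way: ~{new_particles:.0e} particles")
print(f"particle-count gap: ~{new_particles / prior_particles:,.0f}x")
```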

Shrinking the time step, while necessary for accuracy, sharply increases the computational cost. Even with today's most advanced physics-based models, simulating the Milky Way star by star would demand roughly 315 hours of computation for every million years of galactic evolution; at that rate, generating just one billion years of galactic activity would take more than 36 years of uninterrupted computing. Simply adding more supercomputer cores is no solution: energy consumption climbs to excessive levels while parallel efficiency falls off, yielding diminishing returns.
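The quoted cost figures are easy to cross-check with simple arithmetic:

```python
# Sanity check on the quoted cost figures (pure arithmetic).

hours_per_myr = 315         # ~315 compute-hours per million simulated years
myr_per_gyr = 1_000         # million years in a billion years

total_hours = hours_per_myr * myr_per_gyr   # 315,000 hours for one billion years
total_years = total_hours / (24 * 365)      # convert to years of wall-clock time

print(f"{total_hours:,} hours ~= {total_years:.1f} years")  # just under 36 years
```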

To overcome these barriers, Hirashima and his team devised a method that couples a deep-learning surrogate model with conventional physical simulation. The surrogate was trained on high-resolution supernova simulations, learning to predict how gas disperses in the 100,000 years following a supernova explosion without adding computational burden to the main simulation. This AI component let the researchers capture the galaxy's overall behavior while still resolving small-scale events, down to the fine-grained physics of individual supernovae. To validate the approach, the team compared their results against large-scale simulations run on RIKEN's Fugaku supercomputer and The University of Tokyo's Miyabi Supercomputer System.
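The team's actual implementation is described in the SC ’25 paper; as a loose illustration of the general pattern, a hybrid loop might look like the sketch below. Every name here (SupernovaSurrogate, gravity_and_hydro_step, the toy one-dimensional gas state) is hypothetical and not drawn from the paper.

```python
import numpy as np

# Hypothetical sketch of a hybrid simulation loop: coarse physics steps, with
# a trained surrogate standing in for the fine-timestep evolution of supernova
# remnants. None of these names or values come from the paper.

COARSE_DT = 1_000.0          # coarse step in years (illustrative value)
SURROGATE_SPAN = 100_000.0   # span the surrogate predicts, per the article

class SupernovaSurrogate:
    """Placeholder for a network trained on high-resolution supernova runs."""
    def predict_gas_dispersal(self, local_gas: np.ndarray) -> np.ndarray:
        # A real surrogate would map the local gas state around the explosion
        # to its state ~100,000 years later; here we return a smoothed
        # stand-in so the sketch runs end to end.
        return local_gas * 0.5 + local_gas.mean() * 0.5

def gravity_and_hydro_step(gas: np.ndarray, dt: float) -> np.ndarray:
    """Placeholder for the conventional physics update (gravity + fluids)."""
    return gas + np.random.normal(scale=0.01, size=gas.shape) * dt / COARSE_DT

def detect_supernovae(step: int) -> list[int]:
    """Placeholder trigger: pretend a supernova fires every 50 coarse steps."""
    return [0] if step % 50 == 0 and step > 0 else []

surrogate = SupernovaSurrogate()
gas = np.ones(1_000)  # toy 1-D stand-in for the galaxy's gas state

for step in range(200):
    gas = gravity_and_hydro_step(gas, COARSE_DT)
    for region in detect_supernovae(step):
        # Instead of shrinking the global time step to resolve the blast,
        # hand the affected region to the surrogate and splice the result back.
        gas[region:region + 100] = surrogate.predict_gas_dispersal(
            gas[region:region + 100])
```

The design point the sketch tries to convey is the handoff: the expensive, fast-evolving local event is delegated to a pretrained model, so the global simulation never has to drop to the tiny time steps a supernova would otherwise force on it.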

The method delivers true individual-star resolution for galaxies of more than 100 billion stars, with striking gains in speed: simulating one million years of galactic evolution now takes just 2.78 hours instead of roughly 315. At that pace, one billion years of galactic history, which would conventionally take 36 years of computation, can be completed in about 115 days.
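These speed figures can likewise be cross-checked against the article's own numbers:

```python
# Checking the quoted speedup against the article's figures.

old_hours_per_myr = 315.0   # conventional cost per million simulated years
new_hours_per_myr = 2.78    # hybrid AI cost per million simulated years

speedup = old_hours_per_myr / new_hours_per_myr    # ~113x, i.e. "over 100 times faster"
gyr_days = new_hours_per_myr * 1_000 / 24          # ~116 days per billion years

print(f"speedup: ~{speedup:.0f}x")
print(f"one billion years now takes ~{gyr_days:.0f} days")
```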

The implications of this hybrid AI approach reach well beyond astrophysics. Many areas of computational science must link microscopic physical phenomena to macroscopic behavior; fields such as meteorology, oceanography, and climate modeling face analogous challenges and stand to benefit from tools that significantly accelerate complex, multi-scale simulations.

"I firmly believe that the integration of artificial intelligence with high-performance computing represents a fundamental and transformative shift in how we approach and solve multi-scale, multi-physics problems across the entire spectrum of computational sciences," states Hirashima with conviction. "This achievement is not merely a testament to computational power; it powerfully demonstrates that AI-accelerated simulations can transcend their role as mere pattern recognition tools and evolve into genuine engines of scientific discovery. They are now capable of actively assisting us in tracing the very origins of the elements that underpin life itself, revealing their genesis within the complex tapestry of our galaxy."