Scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) have harnessed nearly 7,000 NVIDIA GPUs on the Perlmutter supercomputer to create an exceptionally detailed simulation of a tiny quantum chip. The model captures a chip measuring just 10 millimeters across and 0.3 millimeters thick, with features as small as one micron, and represents a significant step forward in the design and development of future quantum hardware. The simulation's precision lets researchers predict the behavior of these complex devices with a fidelity previously unattainable, promising to accelerate the arrival of powerful, performant quantum computers.

Creating highly detailed computer models of quantum chips lets scientists predict how these intricate devices will behave before the costly, time-consuming process of manufacturing begins. Catching design flaws and performance issues at this early stage helps ensure the finished hardware functions as intended. At the forefront of this work are Zhi Jackie Yao and Andy Nonaka, researchers in Berkeley Lab's Applied Mathematics and Computational Research (AMCR) Division and members of the Quantum Systems Accelerator (QSA), who develop electromagnetic simulations built to support the advancement of cutting-edge quantum hardware.

"The computational model predicts how design decisions affect electromagnetic wave propagation in the chip," explained Nonaka, highlighting the core functionality of their simulation. "This is crucial to make sure proper signal coupling occurs and to avoid unwanted crosstalk, which can lead to errors in quantum computations."

To achieve this level of detail, the team used ARTEMIS, an advanced exascale modeling tool. The software was instrumental in simulating and refining a quantum chip developed through a collaboration between Irfan Siddiqi's Quantum Nanoelectronics Laboratory at the University of California, Berkeley, and Berkeley Lab's own Advanced Quantum Testbed (AQT). Yao will present the findings in a technical demonstration at the upcoming International Conference for High Performance Computing, Networking, Storage, and Analysis (SC25).

The design of quantum chips poses a distinctive set of challenges, blending microwave engineering with the physics of devices operating at extremely low temperatures. A classical electromagnetic simulation platform like ARTEMIS, originally developed under the U.S. Department of Energy's Exascale Computing Project, is therefore well suited to in-depth studies of these quantum systems.

A Colossal Supercomputer Tackles a Microscopic Quantum Chip

Not every simulation demands extreme computing resources, but this project pushed the boundaries of what was thought possible. To capture the fine details of a highly intricate quantum chip, the research team harnessed nearly the full computational capacity of the Perlmutter supercomputer. Over a period of 24 hours, they marshaled almost all of its 7,168 NVIDIA GPUs to model a multilayer chip measuring just 10 millimeters across and 0.3 millimeters thick, yet containing internal features as small as one micron.

"I’m not aware of anybody who’s ever done physical modeling of microelectronic circuits at full Perlmutter system scale. We were using nearly 7,000 GPUs," Nonaka emphasized, underscoring the unprecedented scale of their undertaking. "We discretized the chip into 11 billion grid cells. We were able to run over a million time steps in seven hours, which allowed us to evaluate three circuit configurations within a single day on Perlmutter. These simulations would not have been possible in this time frame without the full system."
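The quoted numbers can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes a uniform Cartesian grid spanning the full chip volume, which is an assumption for illustration; the actual ARTEMIS mesh may be structured differently.

```python
# Rough check of the quoted grid size, assuming a uniform Cartesian grid
# over the full chip volume (the real ARTEMIS discretization may differ).
chip_x_mm, chip_y_mm, chip_z_mm = 10.0, 10.0, 0.3   # chip dimensions from the article
n_cells = 11e9                                       # quoted grid-cell count

# chip volume in cubic microns (1 mm = 1,000 microns)
volume_um3 = (chip_x_mm * 1e3) * (chip_y_mm * 1e3) * (chip_z_mm * 1e3)
cell_volume_um3 = volume_um3 / n_cells
cell_edge_um = cell_volume_um3 ** (1.0 / 3.0)        # edge length of a cubic cell

print(f"implied cell size: ~{cell_edge_um:.1f} microns")
```

The result comes out on the order of a micron per cell, consistent with the article's statement that the grid resolves features as small as one micron.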

This level of precision is what distinguishes the work from previous efforts. Many simulations must simplify quantum chips, treating them as abstract "black boxes," because of computational limits. With thousands of GPUs at their disposal, the researchers could instead model the actual physical structure and behavior of the quantum device in its entirety.

"We do full-wave physical-level simulation, meaning that we care about what material you use on the chip, the layout of the chip, how you wire the metal — the niobium or other type of metal wires — how you build the resonators, what’s the size, what’s the shape, what material you use," explained Yao, detailing the granular nature of their modeling. "We care about those physical details, and we include them in our model."

Beyond structural detail, the simulation also recreates how the chip would perform under actual experimental conditions, including how qubits, the fundamental units of quantum information, interact with each other and with the broader circuit architecture.

Capturing the Nuances of Real-Time Quantum Behavior

By combining detailed physical modeling with time-based simulation, the researchers achieved a genuinely uncommon capability. Their approach solves Maxwell's equations in the time domain, which lets them account for nonlinear effects and track the evolution of signals within the quantum chip.

The combination of these qualities, a focus on the chip's physical design together with the ability to simulate its behavior in real time, is what makes the simulation unique, according to Yao. "The combination is instrumental, because we use the partial differential equation, Maxwell’s equation, and we do it in the time domain so we can incorporate nonlinear behavior. All this adds up to give us one-of-a-kind capability."
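Solving Maxwell's equations directly in the time domain is the idea behind finite-difference time-domain (FDTD) methods. The article does not describe ARTEMIS's exact numerical scheme, but a minimal one-dimensional Yee-style update in normalized units illustrates the general approach; everything here is an illustrative sketch, not ARTEMIS code.

```python
import numpy as np

# Minimal 1D Yee-scheme FDTD loop in normalized units (c = 1, dx = 1).
# Illustrates the time-domain Maxwell-solver idea only; ARTEMIS itself is a
# 3D GPU-accelerated code with real materials and chip geometry.
nx, nt = 400, 1000
ez = np.zeros(nx)          # electric field samples
hy = np.zeros(nx)          # magnetic field, staggered half a cell
courant = 0.5              # dt/dx < 1 for numerical stability

for n in range(nt):
    # update H from the spatial difference (curl) of E
    hy[:-1] += courant * (ez[1:] - ez[:-1])
    # update E from the spatial difference (curl) of H
    ez[1:] += courant * (hy[1:] - hy[:-1])
    # soft Gaussian pulse source injected at the grid center
    ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)

print(f"max |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```

The leapfrog update of staggered electric and magnetic fields is the standard time-domain discretization of Maxwell's curl equations; nonlinear material response would enter through field-dependent update coefficients.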

The project received vital support from the National Energy Research Scientific Computing Center (NERSC) through its Quantum Information Science @ Perlmutter program, which allocates computing time to promising quantum research efforts. Even within that program, this simulation stood out for its scale and ambition.

"This effort stands out as one of the most ambitious quantum projects on Perlmutter to date, using ARTEMIS and NERSC’s computing capabilities to capture quantum hardware detail over more than four orders of magnitude," remarked Katie Klymko, a NERSC quantum computing engineer who played a key role in the project.

The Horizon of Quantum Chip Modeling: Next Steps and Future Directions

Looking ahead, the team plans to expand the scope and precision of their simulations, aiming for a deeper, more quantitative understanding of the quantum chip and its performance within larger, more complex systems.

"We’d like to do a more quantitative simulation so that we can do a post-process and quantify the spectral behavior of the system," stated Yao, outlining their future research goals. "We’d like to see how the qubit is resonating with the rest of the circuit. In the frequency domain, we’d like to benchmark it with other frequency-domain simulations to give us greater confidence that, quantitatively, the simulation is correct."
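Quantifying "spectral behavior" from a time-domain simulation, as Yao describes, typically amounts to Fourier-transforming a recorded field or voltage trace to locate resonances. The sketch below is a hypothetical post-processing step on a synthetic ring-down signal; the 6.5 GHz frequency and the trace itself are invented stand-ins, not data from the actual simulation.

```python
import numpy as np

# Hypothetical post-processing: recover a resonant frequency from a recorded
# time-domain trace via FFT. The 6.5 GHz decaying sinusoid is synthetic,
# standing in for a probe signal logged during a time-domain chip simulation.
dt = 1e-12                            # 1 ps sample spacing
t = np.arange(100_000) * dt           # 100 ns of signal
f_res = 6.5e9                         # synthetic resonator frequency
trace = np.sin(2 * np.pi * f_res * t) * np.exp(-t / 20e-9)  # ring-down

spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(len(trace), d=dt)
peak_hz = freqs[np.argmax(spectrum)]  # frequency bin with the most power
print(f"spectral peak near {peak_hz / 1e9:.2f} GHz")
```

With 100 ns of signal, the frequency resolution is 10 MHz, so the recovered peak lands within one bin of the injected resonance; a frequency-domain solver run on the same geometry would provide the independent benchmark Yao mentions.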

Ultimately, the true test of the simulation's accuracy will be comparison with experiment. Once the quantum chip is fabricated and characterized, the researchers will compare the experimental results against their predictions, an iterative process of comparison and refinement that should further improve the fidelity and predictive power of their models.

Yao and Nonaka emphasized that the achievement was not a solitary effort but the product of close collaboration across Berkeley Lab and its partners, including AMCR, QSA, AQT, and NERSC, which provided both the computing power and invaluable technical expertise. Bert de Jong, the director of QSA, described the work as a pivotal step forward.

"This unprecedented simulation, made possible by a broad partnership among scientists and engineers, is a critical step forward to accelerate the design and development of quantum hardware," de Jong affirmed. "More powerful, more performant quantum chips will unlock new capabilities for researchers and open up new avenues in science, transforming our understanding of the universe and enabling solutions to some of the world’s most pressing challenges."