Tensor operations are the advanced mathematics that underpin many modern technologies, especially artificial intelligence. They go far beyond the simple calculations most people encounter. A helpful way to picture them is to imagine manipulating a Rubik’s cube in several dimensions at once, rotating, slicing, or rearranging its layers. Humans and conventional computers must break such tasks into sequences of steps; light, as this research shows, can perform them all at the same time. Today, tensor operations are essential for AI systems involved in image processing, language understanding, and countless other tasks. As data volumes continue to grow, conventional digital hardware such as GPUs faces mounting strain in speed, energy use, and scalability.
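As a concrete, purely digital illustration of what a tensor operation is, the NumPy sketch below contracts a rank-3 tensor with a matrix. The shapes and values are arbitrary examples chosen for this article, not anything taken from the study:

```python
import numpy as np

# A rank-3 tensor: a batch of 2 matrices, each 3x4
A = np.arange(24).reshape(2, 3, 4)
# A rank-2 tensor (an ordinary matrix), 4x5
B = np.arange(20).reshape(4, 5)

# Contract over the shared axis of length 4:
# C[b, i, j] = sum over k of A[b, i, k] * B[k, j]
C = np.einsum('bik,kj->bij', A, B)
print(C.shape)  # (2, 3, 5)
```

A GPU evaluates such a contraction as many sequential multiply-accumulate steps; the optical approach described in this article aims to realize the whole contraction in a single pass of light.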
To address these challenges, an international team led by Dr. Yufeng Zhang from the Photonics Group at Aalto University’s Department of Electronics and Nanoengineering has developed a fundamentally new approach. Their method allows complex tensor calculations to be completed within a single movement of light through an optical system. The process, described as single-shot tensor computing, functions at the speed of light. "Our method performs the same kinds of operations that today’s GPUs handle, like convolutions and attention layers, but does them all at the speed of light," says Dr. Zhang. "Instead of relying on electronic circuits, we use the physical properties of light to perform many computations simultaneously."
The team accomplished this by embedding digital information into the amplitude and phase of light waves, transforming numerical data into physical variations within the optical field. As these light waves interact, they automatically carry out mathematical procedures such as matrix and tensor multiplication, which form the basis of deep learning. By working with multiple wavelengths of light, the researchers expanded their technique to support even more complex, higher-order tensor operations. "Imagine you’re a customs officer who must inspect every parcel through multiple machines with different functions and then sort them into the right bins," Zhang says. "Normally, you’d process each parcel one by one. Our optical computing method merges all parcels and all machines together — we create multiple ‘optical hooks’ that connect each input to its correct output. With just one operation, one pass of light, all inspections and sorting happen instantly and in parallel."
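A minimal numerical sketch can show how encoding numbers in the amplitude and phase of a field lets interference perform a matrix-vector product. The encoding scheme here (magnitude as amplitude, sign as a 0-or-pi phase) is an assumption chosen for simplicity, an illustration of the principle rather than the authors' actual optical implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Real-valued data to encode: a 4x3 weight matrix and a length-3 input vector
W = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

def encode(v):
    # Hypothetical encoding: magnitude becomes the optical amplitude,
    # sign becomes a phase of 0 (positive) or pi (negative)
    return np.abs(v) * np.exp(1j * np.pi * (v < 0))

W_field = encode(W)
x_field = encode(x)

# Coherent propagation: each output "detector" sums the interfering
# contributions from every input, numerically a multiply-accumulate,
# physically a single pass of light
y_field = W_field @ x_field

# The real part of the detected field recovers the matrix-vector product
y = y_field.real
assert np.allclose(y, W @ x)
```

The point of the sketch is that no step-by-step control is needed: once the data are written into the field, the superposition itself carries out the arithmetic.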
One of the most striking benefits of this method is how little intervention it requires. The necessary operations occur on their own as the light travels, so the system does not need active control or electronic switching during computation. This passive optical processing makes the system remarkably efficient and robust. "This approach can be implemented on almost any optical platform," says Professor Zhipei Sun, leader of Aalto University’s Photonics Group. "In the future, we plan to integrate this computational framework directly onto photonic chips, enabling light-based processors to perform complex AI tasks with extremely low power consumption."
The implications of this research are far-reaching. Current AI hardware, predominantly GPU-based, consumes significant amounts of energy and struggles with the ever-increasing computational demands of advanced AI models. Optical computing, by leveraging the speed and parallelism of light, offers a potential paradigm shift. Unlike electronic signals, which lose energy to resistance as heat, light propagates through a transparent medium with far lower loss, so a computation can finish in the time it takes light to traverse the system. Encoding information in the amplitude and phase of light waves is a sophisticated form of analog computation: the physical evolution of the optical field directly carries out the mathematical operations. This bypasses much of the discrete digital conversion and manipulation of conventional hardware, promising a dramatic increase in speed and a reduction in energy expenditure.
The researchers’ innovative "single-shot" approach is particularly significant. Traditional computing methods, even with highly parallelized architectures like GPUs, still break down complex problems into sequential steps. This optical method, however, allows for all necessary tensor operations to be executed simultaneously as the light propagates through the optical system. This is akin to solving a multidimensional Rubik’s cube by manipulating all its faces and layers at once, rather than performing individual rotations and adjustments one after another. The use of multiple wavelengths of light further enhances the system’s capabilities, enabling it to handle more complex tensor operations, which are crucial for tasks like analyzing high-dimensional data and understanding intricate patterns in AI models.
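The role of multiple wavelengths can be pictured with a software analogy: each wavelength acts as an independent channel, and stacking the channels turns a batch of ordinary matrix operations into one higher-order tensor operation. The channel-per-wavelength mapping below is an assumption made for illustration, not the paper's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Suppose each wavelength channel carries its own matrix-vector problem
n_wavelengths, n_out, n_in = 8, 5, 6
W = rng.standard_normal((n_wavelengths, n_out, n_in))  # one matrix per wavelength
x = rng.standard_normal((n_wavelengths, n_in))         # one input per wavelength

# One "pass": all wavelengths propagate through the system together.
# Numerically this is a single rank-3 tensor contraction over the input axis.
y = np.einsum('wok,wk->wo', W, x)

# Equivalent to handling the wavelengths one at a time, which is what
# sequential hardware would do; the optics process all channels at once
y_loop = np.stack([W[w] @ x[w] for w in range(n_wavelengths)])
assert np.allclose(y, y_loop)
```

Adding wavelength as an extra index is what lifts the system from matrix (rank-2) operations to the higher-order tensor operations the article describes.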
The analogy of the customs officer effectively illustrates the profound difference this technology makes. In the traditional model, each parcel (data point or computation) must pass through individual inspection machines (processing units) sequentially. With the optical approach, all parcels are essentially merged and inspected simultaneously by a unified system, with the light acting as a conduit that directs each parcel to its correct outcome. This parallel processing at the speed of light promises to dramatically accelerate AI workloads across a wide spectrum of applications.
The passive nature of the optical processing is another key advantage. The computation happens as a direct consequence of the light’s interaction with the optical medium, eliminating the need for active electronic control or switching during the computational process. This not only simplifies the hardware but also significantly reduces power consumption, a critical factor in the development of sustainable and scalable AI systems. The potential for integration onto photonic chips means that these light-based processors could be miniaturized and incorporated into existing technological infrastructure, paving the way for a new generation of AI hardware.
Zhang notes that the ultimate objective is to adapt the technique to the hardware and platforms already used by major technology companies, and he estimates that it could be incorporated into such systems within 3 to 5 years. That timeline suggests the research is being developed with practical implementation, not just theory, in mind, and the prospect of adoption by major tech players underscores the transformative impact this technology could have on the AI landscape. "This will create a new generation of optical computing systems, significantly accelerating complex AI tasks across a myriad of fields," he concludes. The study was published in Nature Photonics on November 14th, 2025, marking a significant milestone in the pursuit of faster, more efficient, and more powerful AI. The breakthrough could open new frontiers in fields ranging from scientific research and drug discovery to autonomous systems and personalized medicine, all powered by light.

