Tensor operations are a form of advanced mathematics that underpins many modern technologies, especially artificial intelligence. They go far beyond the simple calculations most people encounter: a helpful way to picture them is manipulating a Rubik's cube in several dimensions at once, rotating, slicing, or rearranging its layers. Humans and traditional computers must break such tasks into sequences of steps; light, as this work shows, can perform them all at once. Today, tensor operations are essential for AI systems involved in image processing, language understanding, and countless other tasks. The sheer computational demand of modern AI, from understanding nuanced human language to generating photorealistic images, strains conventional digital hardware such as GPUs in speed, energy use, and scalability, creating a pressing need for novel computing paradigms that can keep pace with the exponential growth of data and the increasing complexity of AI models.
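To make "tensor operation" concrete, here is a minimal NumPy sketch (illustrative shapes only, not taken from the paper): a rank-4 data tensor is contracted with a matrix, the kind of multiply-accumulate workload the optical scheme aims to complete in a single pass of light.

```python
import numpy as np

# A small batch of "images": 2 images, 4x4 pixels, 3 color channels (rank-4 tensor).
images = np.random.rand(2, 4, 4, 3)

# A matrix (rank-2 tensor) mixing the 3 color channels into 5 feature channels.
mixing = np.random.rand(3, 5)

# Tensor contraction: every pixel's 3-channel vector is multiplied by the matrix.
# A CPU or GPU sequences these multiply-adds; the optical approach described in
# the article performs such contractions simultaneously as light propagates.
features = np.tensordot(images, mixing, axes=([3], [0]))
print(features.shape)
```

Running this prints `(2, 4, 4, 5)`: the same spatial layout, with channels transformed by the matrix.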
To address these challenges, an international team led by Dr. Yufeng Zhang from the Photonics Group at Aalto University’s Department of Electronics and Nanoengineering has developed a fundamentally new approach. Their method allows complex tensor calculations to be completed within a single movement of light through an optical system. The process, described as single-shot tensor computing, functions at the speed of light. This breakthrough bypasses the sequential processing limitations of electronic computers, offering a glimpse into a future where AI computation is as rapid and fluid as light itself.
"Our method performs the same kinds of operations that today’s GPUs handle, like convolutions and attention layers, but does them all at the speed of light," says Dr. Zhang. "Instead of relying on electronic circuits, we use the physical properties of light to perform many computations simultaneously." This shift from electronic to optical computation represents a paradigm change. Unlike electrons that must traverse physical wires and undergo discrete switching operations, photons can travel unimpeded and interact in ways that naturally lend themselves to parallel processing. The core idea is to leverage the wave-like nature of light to encode and process information in a fundamentally different manner.
The team accomplished this by embedding digital information into the amplitude and phase of light waves, transforming numerical data into physical variations within the optical field. As these light waves interact, they automatically carry out mathematical procedures such as matrix and tensor multiplication, which form the basis of deep learning. By working with multiple wavelengths of light, the researchers expanded their technique to support even more complex, higher-order tensor operations. This encoding strategy allows for massive parallelization of computations: each property of the light wave, its intensity, its phase, and even its color (wavelength), acts as a separate channel for carrying information. When these modulated light waves are guided through a specially designed optical system, their interactions naturally perform the desired mathematical transformations, akin to a multi-dimensional dance whose choreography itself carries out the computation.
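The principle behind amplitude-and-phase encoding can be sketched numerically. In this toy model (my own simplification, not the authors' implementation), each number is a complex field amplitude; passing the fields through transmissive "weight" elements and letting them interfere on one detector yields a coherent sum, which is exactly a multiply-accumulate:

```python
import numpy as np

# Toy values, not from the paper. A signed number x can be encoded as a field
# with amplitude |x| and phase 0 or pi, i.e. a real-valued complex amplitude.
inputs = np.array([0.5, -1.2, 0.8], dtype=complex)   # data encoded in light
weights = np.array([1.0, 0.3, -0.7], dtype=complex)  # optical masks ("weights")

# Each wave is scaled by its mask; all waves then interfere (sum) at the output.
output_field = np.sum(inputs * weights)

# Interference has computed the same thing as an electronic dot product:
assert np.isclose(output_field, inputs @ weights)
print(output_field.real)
```

Using several wavelengths gives independent copies of this channel, which is how the same single pass of light can carry out many such products at once, i.e. a higher-order tensor operation.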
"Imagine you’re a customs officer who must inspect every parcel through multiple machines with different functions and then sort them into the right bins," Zhang illustrates. "Normally, you’d process each parcel one by one. Our optical computing method merges all parcels and all machines together — we create multiple ‘optical hooks’ that connect each input to its correct output. With just one operation, one pass of light, all inspections and sorting happen instantly and in parallel." This analogy vividly captures the essence of the innovation. In a traditional system, each "machine" (computational step) would need to process the "parcel" (data) sequentially. The optical approach, however, can process all parcels through all virtual "machines" simultaneously, with the light itself acting as the conduit and the intermediary. The "optical hooks" represent the precise physical configurations of the optical system that guide the light to perform the correct operations.
One of the most striking benefits of this method is how little intervention it requires. The necessary operations occur on their own as the light travels, so the system does not need active control or electronic switching during computation. This inherent "passivity" of the optical processing is a key advantage, contributing to its efficiency and speed. Traditional electronic processors require constant switching of electrical signals, which consumes energy and introduces delays. In contrast, once the light is modulated and guided, the computation unfolds organically through the physical interactions of light waves. This "analog" nature of optical computation, where physical properties directly represent computational states, allows for a far more direct and efficient execution of complex mathematical functions.
"This approach can be implemented on almost any optical platform," says Professor Zhipei Sun, leader of Aalto University’s Photonics Group. "In the future, we plan to integrate this computational framework directly onto photonic chips, enabling light-based processors to perform complex AI tasks with extremely low power consumption." The potential for integration onto photonic chips is a crucial step towards practical deployment. Photonic chips, which use light instead of electricity to transmit information, are already being developed for various applications. Merging this novel tensor computing capability with existing photonic chip technology could lead to the creation of AI accelerators that are orders of magnitude more efficient than their silicon counterparts. The reduction in power consumption is particularly significant, as the energy demands of AI are a growing concern, limiting the deployment of powerful AI systems in mobile devices, edge computing, and large-scale data centers.
Zhang notes that the ultimate objective is to adapt the technique to existing hardware and platforms used by major technology companies. He estimates that the method could be incorporated into such systems within 3 to 5 years. This timeline suggests that the research is not purely theoretical but has a clear path towards real-world application. The ability to adapt this optical computing paradigm to established technological ecosystems is vital for its widespread adoption. Major tech companies are constantly seeking ways to enhance AI performance and efficiency, and this light-based approach offers a compelling solution. The integration into existing workflows and hardware would minimize the disruption and accelerate the transition to a new era of AI computation.
"This will create a new generation of optical computing systems, significantly accelerating complex AI tasks across a myriad of fields," he concludes. The implications of this research extend far beyond incremental improvements. It has the potential to unlock entirely new capabilities in AI, enabling applications that are currently infeasible due to computational limitations. Fields such as drug discovery, climate modeling, advanced robotics, and hyper-realistic virtual environments could see dramatic advancements. The ability to process information at the speed of light, with unparalleled parallelism and energy efficiency, opens up a universe of possibilities for artificial intelligence and its impact on society. The study was published in Nature Photonics on November 14th, 2025, marking a significant milestone in the quest for faster, more powerful, and more sustainable AI. This publication signifies the rigorous peer-review process and the scientific community’s recognition of the importance and validity of this groundbreaking research.

