Professor Nikolay Brilliantov of Skoltech AI, a co-author of the study, articulated the speculative yet compelling nature of their conclusions. "Our conclusion is of course highly speculative in application to human senses, although you never know: It could be that humans of the future would evolve a sense of radiation or magnetic field," he stated. "But in any case, our findings may be of practical importance for robotics and the theory of artificial intelligence." He elaborated on the core discovery: "It appears that when each concept retained in memory is characterized in terms of seven features—as opposed to, say, five or eight—the number of distinct objects held in memory is maximized."
This research builds on a tradition of memory modeling that dates back to the early 20th century. The Skoltech team focused on the fundamental building blocks of memory, termed "engrams." An engram can be conceptualized as a transient, distributed network of neurons across various brain regions that activate in concert. Each engram serves as the neural representation of a specific concept, and each concept is defined by a unique set of features. For humans, these features are tied to our sensory experiences. Consider the concept of a banana: its mental representation is a composite of its visual appearance, its aroma, its taste, and its other sensory qualities. Within this framework, a banana perceived through the five human senses becomes a five-dimensional object in a vast "mental space" that encompasses all the memories stored in the brain.
The dynamic nature of engrams is a key aspect of learning and forgetting. These neural assemblies are not static; they evolve over time. Their sharpness, or the clarity of their representation, can increase or decrease depending on the frequency with which they are activated by incoming sensory data from the external world. This continuous process mirrors our own cognitive journey of acquiring new knowledge and gradually losing older information as we navigate and interact with our environment.
Professor Brilliantov further elaborated on the mathematical underpinnings of this evolutionary process. "We have mathematically demonstrated that the engrams in the conceptual space tend to evolve toward a steady state, which means that after some transient period, a ‘mature’ distribution of engrams emerges, which then persists in time," he explained. "As we consider the ultimate capacity of a conceptual space of a given number of dimensions, we somewhat surprisingly find that the number of distinct engrams stored in memory in the steady state is the greatest for a concept space of seven dimensions. Hence the seven senses claim."
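The decay-and-reinforcement picture described above can be caricatured in a few lines of code. The sketch below is purely illustrative and is not the authors' model: each engram carries a scalar "sharpness" that decays every step (forgetting) and is boosted whenever its concept happens to be stimulated (learning). All names, rates, and probabilities here are invented for illustration.

```python
import random

def simulate_engrams(n_engrams=50, steps=2000, decay=0.01,
                     boost=0.05, seed=0):
    """Toy engram dynamics: sharpness decays each step and is
    reinforced when the engram's concept is randomly stimulated.
    Frequently stimulated engrams settle near a steady sharpness
    where boost and decay balance; rarely stimulated ones fade."""
    rng = random.Random(seed)
    # Fixed stimulation probability per engram -- an invented
    # stand-in for how often the world activates that concept.
    stim_prob = [rng.random() * 0.1 for _ in range(n_engrams)]
    sharpness = [0.5] * n_engrams
    for _ in range(steps):
        for i in range(n_engrams):
            sharpness[i] *= (1.0 - decay)                     # forgetting
            if rng.random() < stim_prob[i]:
                sharpness[i] += boost * (1.0 - sharpness[i])  # learning
    return stim_prob, sharpness

probs, sharp = simulate_engrams()
# Frequently stimulated engrams should end up sharper on average.
hi = [s for p, s in zip(probs, sharp) if p > 0.05]
lo = [s for p, s in zip(probs, sharp) if p <= 0.05]
print(sum(hi) / len(hi) > sum(lo) / len(lo))
```

In this caricature the expected steady-state sharpness for stimulation probability p is p·boost / (decay + p·boost), which is the "mature distribution that persists in time" in miniature: after the transient, each engram hovers around a level set by how often it is reinforced.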
To translate this into more accessible terms, imagine that the objects populating the real world can be described by a finite set of characteristics. These characteristics can be thought of as dimensions within a conceptual space. The researchers’ objective was to determine how to maximize the storage capacity of this conceptual space, measured by the number of distinct concepts that can be associated with these objects. A greater capacity implies a more profound and nuanced understanding of the world. The surprising revelation from their mathematical modeling is that this maximum capacity is achieved when the conceptual space possesses seven dimensions. This mathematical optimum, they conclude, points towards seven as the ideal number of sensory inputs for optimal cognitive processing.
A significant aspect of their findings is the robustness of this "seven senses" conclusion. The researchers emphasize that this number appears to be remarkably independent of the specific details of their model. Whether it’s the inherent properties of the conceptual space itself or the precise nature of the stimuli that provide sensory impressions, the number seven emerges as a persistent and unyielding feature of memory engrams. However, they do acknowledge a crucial caveat: when multiple engrams, varying in size, are clustered around a common central representation, they are considered to embody similar concepts. In such instances, they are treated as a single conceptual unit when calculating the overall memory capacity. This implies that conceptual redundancy, while natural, needs to be accounted for in assessing the true information-holding potential.
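The caveat about clustered engrams can be made concrete with a toy computation. The sketch below is not the paper's method: it simply scatters random engrams as points in a d-dimensional conceptual space, merges any pair closer than a similarity radius into one conceptual unit (single-linkage style, via union-find), and counts the surviving distinct concepts. The dimension, point count, and radius are all invented for illustration.

```python
import math
import random

def distinct_concepts(points, radius):
    """Merge engrams closer than `radius` into a single conceptual
    unit (union-find / single linkage) and count the clusters."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Find the cluster representative, with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < radius:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj  # merge similar engrams
    return len({find(i) for i in range(n)})

rng = random.Random(1)
dims = 7  # one coordinate per sensory feature
engrams = [tuple(rng.random() for _ in range(dims)) for _ in range(200)]
print(distinct_concepts(engrams, radius=0.3))
```

Shrinking the radius to zero counts every engram separately, while a radius larger than the diameter of the space collapses everything into one concept; the capacity the researchers maximize lies between these extremes, with near-duplicate engrams counted once.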
The human capacity for memory remains one of the most enigmatic phenomena in science, intricately intertwined with consciousness itself. As researchers work to unravel the complexities of the mind, advances in theoretical memory models are essential. The implications of the Skoltech study extend beyond academic curiosity, offering a potential blueprint for more sophisticated artificial intelligence: understanding the optimal sensory dimensionality for memory could inform AI agents that learn, adapt, and represent the world with greater depth and richness, perhaps even mirroring aspects of human memory. Whether a seven-sense cognitive architecture arises through evolution or engineering, it opens a new chapter in the ongoing exploration of cognition.

