October 14, 2024

Brain’s Intuitive Understanding of the World Mirrors Computational Models

Researchers from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT have found evidence suggesting that the brain may develop an intuitive understanding of the physical world through a process similar to self-supervised learning in computational models. Self-supervised learning is a type of machine learning that enables models to learn about visual scenes based solely on their similarities and differences, without any explicit labels.

In their studies, the researchers trained neural networks using self-supervised learning and observed that the resulting models generated activity patterns similar to those seen in the brains of animals performing the same tasks. This indicates that these models can learn representations of the physical world, enabling them to make accurate predictions about what will happen in that world. The researchers propose that the mammalian brain may employ a similar strategy.

The lead author of one of the studies, Aran Nayebi, explains that the aim of their work is not only to build better robots but also to gain a deeper understanding of the brain. Although the researchers cannot yet determine whether this organizing principle applies to the entire brain, their results suggest that it operates across different scales and brain areas.

Traditionally, computer vision models relied on supervised learning, where images are labeled with corresponding names for classification. However, this approach requires a large amount of human-labeled data. In recent years, researchers have turned to contrastive self-supervised learning, a more efficient alternative that allows models to learn useful representations of objects from the similarities and differences between them, without any external labels.
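
To make the idea concrete, the sketch below shows a minimal contrastive objective in the SimCLR/InfoNCE family, written in PyTorch. It is purely illustrative and not the code used in the MIT studies: embeddings of two views of the same item are pulled together, embeddings of different items are pushed apart, and no labels are involved anywhere.

```python
# Hypothetical sketch of a contrastive (SimCLR/InfoNCE-style) objective: two
# "views" of the same image should map to nearby embeddings, while views of
# different images are pushed apart. No human labels are used.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss for a batch of paired embeddings z1[i] <-> z2[i]."""
    z1 = F.normalize(z1, dim=1)           # unit-length embeddings
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature      # cosine similarity of every pair
    targets = torch.arange(z1.size(0))    # the matching view is the "positive"
    return F.cross_entropy(logits, targets)

# Toy usage: random vectors stand in for encoder outputs on two augmented views.
batch, dim = 8, 32
z1, z2 = torch.randn(batch, dim), torch.randn(batch, dim)
print(float(contrastive_loss(z1, z2)))
```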

Nayebi explains that the power of this approach is that it can leverage very large modern datasets, especially video. Much of the recent progress in AI, including systems such as ChatGPT and GPT-4, rests on models trained with self-supervised objectives, which yield highly flexible representations.

Neural network models consist of many interconnected processing units. As a network analyzes vast amounts of data, the strengths of the connections between its units change, allowing it to learn to perform tasks. The activity of each unit can be measured over time, giving a response pattern analogous to the firing pattern of a neuron in the brain. Previous research has shown that self-supervised models of vision generate activity patterns similar to those in the visual processing system of mammalian brains.
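
The toy example below, assuming PyTorch and a deliberately tiny network, illustrates what "measuring the activity patterns of units" can mean in practice: a forward hook records the hidden-layer responses to a batch of stimuli, producing an activity matrix that could in principle be compared with recordings from real neurons. The architecture and data here are hypothetical stand-ins, not the models from the studies.

```python
# Illustrative sketch: record the "activity" of hidden units in a small
# network with a forward hook, yielding one activity vector per stimulus.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 4))

activations = {}
def record(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

net[1].register_forward_hook(record("hidden"))  # tap the hidden layer

x = torch.randn(100, 10)            # 100 toy stimuli, 10 input features each
_ = net(x)
print(activations["hidden"].shape)  # (100, 64): responses of 64 units to 100 stimuli
```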

In their studies presented at the NeurIPS conference, the MIT researchers aimed to determine whether self-supervised computational models of other cognitive functions would also exhibit similarities to the mammalian brain. In one study, the researchers trained models to predict the future state of their environment using self-supervised learning and then evaluated them on a task called Mental-Pong. The results showed that the model tracked the trajectory of the hidden ball much as neurons in the mammalian brain do.
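
As a rough illustration of that kind of predictive, self-supervised objective, the sketch below trains a small recurrent network to predict the next state of a toy 2-D trajectory from the states that precede it. The environment, architecture, and data are assumptions made for the example; the actual models and the Mental-Pong task are considerably richer.

```python
# Minimal sketch (toy 2-D "ball state" data, a simple GRU). The
# self-supervised objective is simply to predict the next state from the past.
import torch
import torch.nn as nn

class NextStatePredictor(nn.Module):
    def __init__(self, state_dim=2, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, states):               # states: (batch, time, state_dim)
        h, _ = self.rnn(states)
        return self.head(h)                  # predicted next state at each step

model = NextStatePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

trajectories = torch.cumsum(torch.randn(32, 20, 2) * 0.1, dim=1)  # fake ball paths
pred = model(trajectories[:, :-1])            # predict step t+1 from steps <= t
loss = nn.functional.mse_loss(pred, trajectories[:, 1:])
loss.backward()
opt.step()
print(float(loss))
```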

The second study focused on grid cells, specialized neurons involved in navigation. Grid cells, along with place cells located in the hippocampus, help animals encode their spatial position. The researchers trained a self-supervised model to perform path integration, in which an animal's current location is estimated from its starting point and its velocity over time. The model learned to represent space efficiently and exhibited activation patterns similar to those observed in grid cells.
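
A toy version of the path-integration setup might look like the following: a recurrent network receives a starting position and a stream of velocities, and is trained to report its current position at every step. This is only a schematic sketch under those assumptions, not the authors' model, and it omits the training objective and constraints under which grid-cell-like responses actually emerge.

```python
# Toy sketch of path integration (hypothetical, not the authors' model): an RNN
# sees only a start position and velocities, and must report position over time.
import torch
import torch.nn as nn

class PathIntegrator(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.RNN(2, hidden, batch_first=True)   # input: velocity (vx, vy)
        self.init = nn.Linear(2, hidden)                  # encode starting position
        self.readout = nn.Linear(hidden, 2)               # decode current position

    def forward(self, start, velocities):
        h0 = torch.tanh(self.init(start)).unsqueeze(0)    # (1, batch, hidden)
        h, _ = self.rnn(velocities, h0)
        return self.readout(h)                            # (batch, time, 2)

batch, steps = 16, 50
start = torch.rand(batch, 2)
vel = torch.randn(batch, steps, 2) * 0.05
true_pos = start.unsqueeze(1) + torch.cumsum(vel, dim=1)  # ground-truth trajectory

model = PathIntegrator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(start, vel), true_pos)
loss.backward()
opt.step()
print(float(loss))
```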

The researchers believe that their findings have implications beyond building better robots. Because these computational models predict neural data, they bring us closer to emulating natural intelligence. This connection between AI models and neurobiology is essential for understanding the inner workings of the brain and for developing artificial systems that mimic it.

The studies were supported by various organizations, including the K. Lisa Yang ICoN Center, the National Institutes of Health, and the Simons Foundation.

Money Singh

Money Singh is a seasoned content writer with over four years of experience in the market research sector. Her expertise spans various industries, including food and beverages, biotechnology, chemical and materials, defense and aerospace, consumer goods, etc.
