The Neural Mechanism of Knowledge Assembly in the Human Brain Inspires Artificial Intelligence Algorithm

Background Introduction

When new information enters the brain, humans’ pre-existing knowledge of the world can be rapidly restructured through a process called “knowledge assembly.” In a recent study, Nelli et al. explored the neural correlates of knowledge assembly in the human brain. Inspired by this neural mechanism, researchers developed an artificial neural network algorithm that achieves rapid knowledge assembly and improves system flexibility. This work once again demonstrates that studying how the brain works can drive the development of better computational algorithms.

Research Source

This research paper was authored by Xiang Ji, Wentao Jiang, Xiaoru Zhang, Ming Song, Shan Yu, and Tianzi Jiang. The authors are primarily from the Center for Excellence in Brain Science and Intelligence Technology, the Brainnetome Center and Laboratory at the Institute of Automation, Chinese Academy of Sciences, and the Center for Augmented Intelligence at Zhejiang Laboratory. The paper was received on September 6, 2023, accepted on October 15, 2023, and published online on November 4, 2023. It appears in the February 2024 issue of Neuroscience Bulletin, Vol. 40, No. 2.

Research Process

Nelli et al. designed an ingenious experimental paradigm to study the knowledge assembly process in the human brain. The experiment included the following steps:

  1. Knowledge Training: First, 12 items were randomly divided into two groups, and participants were trained to learn the order of the items within each group. This established two independent ordered sequences, the “old knowledge,” in the participants’ brains.

  2. Introduction of New Information: Participants were then given a new piece of information: every item in one group ranks lower than every item in the other group (see the sketch after this list). This new information triggered knowledge assembly in the participants’ brains.

  3. Knowledge Assembly: Finally, the “old knowledge” was assembled into a new sequence containing all 12 items. Throughout this period, whole-brain signals were recorded with functional magnetic resonance imaging (fMRI). Analyzing the fMRI signals with a whole-brain searchlight approach, the researchers found that the frontoparietal network, particularly areas in the posterior parietal cortex (PPC) and the dorsomedial prefrontal cortex (DMPFC), was the key network related to knowledge assembly.
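The structure of this training material can be made concrete with a small sketch. The item labels, the 6 + 6 split, and the exact form of the linking premise below are illustrative assumptions; only the total of 12 items and the "one group ranks below the other" relation are stated in the summary above.

```python
# Hypothetical item labels; the 6 + 6 split and the linking premise shown here
# are illustrative assumptions, not details reported in the summary above.
group_a = ["A1", "A2", "A3", "A4", "A5", "A6"]
group_b = ["B1", "B2", "B3", "B4", "B5", "B6"]

def adjacent_pairs(seq):
    """Premise pairs (lower, higher) used to learn the order within one group."""
    return [(seq[i], seq[i + 1]) for i in range(len(seq) - 1)]

# Step 1: two independent ordered sequences ("old knowledge").
old_knowledge = adjacent_pairs(group_a) + adjacent_pairs(group_b)

# Step 2: the new information links the two groups, so that every item in
# group A ranks below every item in group B.
new_information = [(a, b) for a in group_a for b in group_b]

# Step 3: the target of knowledge assembly, a single 12-item sequence.
assembled_sequence = group_a + group_b
```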

In the PPC and DMPFC, the certainty of each item’s position in the order was indexed by differences in fMRI activation signals. Computing these differences across all 12 items yielded a representational dissimilarity matrix (RDM), which visualizes the neural representational geometry in the PPC and DMPFC.
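As a rough illustration of how such a matrix is built, the sketch below assumes that each item’s order certainty is summarized by a single scalar activation value per region, so that pairwise differences between these values form a 12 × 12 matrix. The specific signal extraction and distance measure used in the original analysis may differ.

```python
import numpy as np

# Placeholder: one scalar per item summarizing the fMRI signal in a region of
# interest (e.g. PPC or DMPFC); random values stand in for real estimates.
activation = np.random.rand(12)

# Pairwise (absolute) differences between items give a 12 x 12 matrix whose
# structure visualizes the neural representational geometry.
rdm = np.abs(activation[:, None] - activation[None, :])
print(rdm.shape)  # (12, 12)
```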

Main Research Results

By comparing the neural representational geometries formed in the cortex before and after the new information, the researchers found that the original geometry rapidly assembled into a new structure once the new information was added. After revealing this neural mechanism, they showed that standard artificial neural networks fail on this few-shot knowledge assembly problem. The authors therefore proposed an improved network model that introduces a “certainty” parameter matrix into the weight update process. The certainty matrix preserves the internal geometric structure of each of the two sets of information; when new information is added, the two geometries shift and assemble into a single new structure, thus achieving knowledge assembly in the artificial neural network.
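The summary above does not give the exact formulation of the certainty matrix, so the following is only a hedged sketch of one plausible reading: certainty scales each weight’s update, so that well-learned (high-certainty) weights, which carry the existing geometry, move little, while low-certainty weights absorb the new information.

```python
import numpy as np

def certainty_sgd_step(W, grad, certainty, lr=0.1):
    """One assumed form of a certainty-modulated SGD update.

    W         : weight matrix of a network layer
    grad      : gradient of the loss with respect to W
    certainty : matrix in [0, 1], same shape as W; 1 = fully consolidated weight
    lr        : learning rate

    Assumption (not taken from the paper): high-certainty weights are updated
    less, preserving the geometry of old knowledge while new information is
    absorbed by the remaining, low-certainty weights.
    """
    return W - lr * (1.0 - certainty) * grad
```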

Research Conclusion

This study is of great importance for cognitive neuroscience and is highly instructive for the development of artificial intelligence (AI). On the neuroscience side, the researchers identified cortical regions related to knowledge assembly and uncovered its neural mechanisms in the human brain: in the PPC and DMPFC, the addition of new information caused the original neural representational geometry to rapidly reassemble into a new structure. On the AI side, although various algorithms (such as graph-based architectures, modular networks, probabilistic programs, and deep generative models) have been proposed to build rich conceptual knowledge structures, they often learn slowly and require extensive supervision, making fast knowledge assembly difficult when only minimal new information is available.

Inspired by the knowledge assembly process in the human brain, Nelli et al. modified the stochastic gradient descent (SGD) algorithm and proposed a simple two-layer neural network that achieves rapid knowledge assembly, requiring only 20 training trials to learn the new information. In doing so, the authors drew on human cognitive mechanisms and successfully applied them to improving artificial neural networks.
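To make the scale of such a model concrete, here is a minimal sketch of a two-layer network learning a single linking premise over about 20 trials. The architecture, one-hot encoding, hinge loss, and item indices are illustrative assumptions; for brevity it uses plain gradient descent, where a certainty-modulated update like the one sketched earlier would take the place of the final two lines.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_hidden, lr = 12, 16, 0.5

# Two-layer network: one-hot item -> tanh hidden layer -> scalar rank score.
W1 = rng.normal(0.0, 0.1, (n_hidden, n_items))
W2 = rng.normal(0.0, 0.1, (1, n_hidden))

def forward(item):
    x = np.zeros((n_items, 1)); x[item, 0] = 1.0
    h = np.tanh(W1 @ x)
    return x, h, (W2 @ h).item()

# Hypothetical linking premise: item 5 (end of one group) ranks below item 6
# (start of the other). Train on this single premise for ~20 trials.
for trial in range(20):
    x_lo, h_lo, s_lo = forward(5)
    x_hi, h_hi, s_hi = forward(6)
    if s_hi - s_lo >= 1.0:                 # margin satisfied, nothing to learn
        continue
    # Hinge loss L = 1 - (s_hi - s_lo); gradients computed by hand (backprop).
    dW2 = -(h_hi - h_lo).T                 # dL/dW2
    d_hi = -(W2.T * (1.0 - h_hi ** 2))     # dL/d(pre-activation), "hi" item
    d_lo = +(W2.T * (1.0 - h_lo ** 2))     # dL/d(pre-activation), "lo" item
    dW1 = d_hi @ x_hi.T + d_lo @ x_lo.T
    W2 -= lr * dW2
    W1 -= lr * dW1
```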

Research Highlights

Compared with current AI, the human brain exhibits stronger reasoning ability, greater flexibility, and better adaptability. To develop more advanced AI systems, brain-inspired computing now draws inspiration not only from low-level perceptual functions (such as visual and auditory processing) but also from higher-level cognitive functions (such as language and memory). By incorporating principles of neural computation into artificial neural networks, AI systems can exhibit characteristics of human intelligence at both the perceptual and cognitive levels and thereby achieve higher performance.

Future Development

In the future, brain-inspired algorithms are expected to make further progress in the following areas:

  1. More Flexible AI Systems: As neuroscience continues to reveal new neural mechanisms, integrating these mechanisms into AI systems will further enhance their flexibility, enabling human-like behavior.

  2. Development of Spiking Neural Networks (SNNs): Spiking neural networks are more biologically plausible than artificial neural networks, but their current performance lags behind artificial neural networks, necessitating further research and improvement.

  3. Fusion of Multimodal Inputs: The human brain can process multiple sensory inputs (such as visual and auditory stimuli) and integrate them to form an overall understanding of the environment. Therefore, improving multimodal input fusion capability is a promising direction for brain-inspired computing.

By studying the working mechanisms of the human brain, researchers not only enrich theories in cognitive neuroscience but also drive advances in AI algorithms. Brain-inspired computing is poised to achieve even greater accomplishments in the future.