Representation of Internal Speech by Single Neurons in Human Supramarginal Gyrus
Background
In recent years, brain-machine interface (BMI) technology has made significant advances in speech decoding. BMIs enable people who have lost the ability to speak due to disease or injury to communicate again by converting brain signals into speech or audio output. However, despite substantial progress in decoding vocalized, attempted, and mimed speech, research on decoding internal (covert) speech remains relatively scarce and underdeveloped. This paper addresses the challenges inherent in decoding internal speech, particularly determining from which brain regions internal speech can be decoded. The study focuses on neural signals recorded in the supramarginal gyrus (SMG) and the primary somatosensory cortex (S1).
Paper Source
This study was conducted by Sarah K. Wandelt, David A. Bjånes, Kelsie Pejsa, Brian Lee, Charles Liu, and Richard A. Andersen, affiliated with the California Institute of Technology, the Rancho Los Amigos National Rehabilitation Center, and the Keck School of Medicine of USC. The paper was published in "Nature Human Behaviour" in June 2024; the DOI for the article is https://doi.org/10.1038/s41562-024-01867-y.
Research Details
a) Research Methodology
The research includes the following crucial steps:
Subject selection and device implantation: Two tetraplegic participants took part in the study; each had microelectrode arrays implanted in the SMG and S1.
Task design: The two participants performed internal speech and vocal speech tasks using a vocabulary of six words and two pseudowords. Each trial comprised six phases: intertrial interval (ITI), cue phase, first delay (D1), internal speech phase, second delay (D2), and vocal speech phase. Words were presented using either auditory or written cues, and participants produced each word internally or vocally during the corresponding phase.
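The six-phase trial structure above can be sketched as a simple timeline; the phase durations here are illustrative placeholders, not values from the paper:

```python
# The six trial phases described above; durations (in seconds) are
# illustrative assumptions only, not the paper's actual timing.
TRIAL_PHASES = [
    ("ITI", 2.0),        # intertrial interval
    ("cue", 1.5),        # auditory or written word cue
    ("D1", 1.0),         # first delay
    ("internal", 1.5),   # internal (covert) speech phase
    ("D2", 1.0),         # second delay
    ("vocal", 1.5),      # vocalized speech phase
]

def phase_at(t: float) -> str:
    """Return which trial phase a time offset t (seconds from trial start) falls in."""
    elapsed = 0.0
    for name, duration in TRIAL_PHASES:
        elapsed += duration
        if t < elapsed:
            return name
    return "done"
```

Structuring the trial this way makes it easy to label each recorded spike with the phase it occurred in, which is what phase-wise decoding analyses require.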
Data collection and analysis: Neural activity in the SMG and S1 was recorded using the microelectrode arrays. Neuronal tuning was assessed with linear regression, and word decoding used a pipeline combining Principal Component Analysis (PCA) for dimensionality reduction with Linear Discriminant Analysis (LDA) for classification.
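A minimal sketch of such a PCA-then-LDA decoding pipeline, using scikit-learn on synthetic firing-rate data (the array sizes, noise model, and component count below are illustrative assumptions, not the paper's data or parameters):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for trial-averaged firing rates:
# 8 "words" x 40 trials each, 96 channels (sizes are illustrative only).
n_words, n_trials, n_channels = 8, 40, 96

# Give each word a distinct mean firing-rate pattern, plus trial noise.
means = rng.normal(0.0, 1.0, size=(n_words, n_channels))
X = np.vstack([means[w] + rng.normal(0.0, 2.0, size=(n_trials, n_channels))
               for w in range(n_words)])
y = np.repeat(np.arange(n_words), n_trials)

# PCA for dimensionality reduction, then LDA for classification,
# mirroring the pipeline described above.
decoder = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
acc = cross_val_score(decoder, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance = {1 / n_words:.3f})")
```

Reducing the channel data with PCA before LDA keeps the classifier well-conditioned when the number of recording channels is large relative to the number of trials.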
b) Major Results
Neuronal Representation: Significant neuronal representation of internal and vocal speech was found at the single neuronal and neuronal population level in the SMG. The recorded population activity could significantly decode both internal and vocal words.
Decoding Accuracy: In offline analyses, average internal speech decoding accuracy was 55% for one participant and 24% for the other (chance level 12.5%); in online internal speech BMI tasks, accuracies were 79% and 23%, respectively.
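The 12.5% chance level follows directly from the eight-word vocabulary (1/8). Whether an accuracy such as 24% is significantly above chance depends on the number of trials, which can be checked with a binomial test; the trial count below is a made-up example, not the study's:

```python
from scipy.stats import binomtest

n_classes = 8
chance = 1 / n_classes          # 0.125, i.e. the 12.5% chance level

# Hypothetical session: 24% correct over 200 trials (these counts are
# illustrative assumptions, not figures taken from the paper).
n_trials = 200
n_correct = int(round(0.24 * n_trials))   # 48 correct predictions

# One-sided test: is the hit rate greater than chance?
result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
print(f"chance = {chance:.1%}, p-value vs chance = {result.pvalue:.4g}")
```

Even a modest-looking accuracy like 24% can be highly significant against an eight-way chance level once enough trials are collected.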
Shared Neuronal Representation: In Participant 1, shared neuronal representation was found among internal speech, word reading, and vocal speech processes. The SMG not only represented words but also pseudowords, providing evidence of its role in speech encoding.
S1’s Relationship with Internal and Vocal Speech: In both participants, S1 activity was modulated only during vocalized speech, suggesting that internal speech was produced without accompanying orofacial movements.
c) Conclusions & Value
The study suggests the SMG could serve as a potential brain region for high-performance internal speech BMI. The scientific value of this study lies in revealing the crucial role of SMG in the decoding of internal and vocal speech, providing a new direction for future speech BMI systems. In terms of application value, the research outcome could provide technical support for the restoration of communication abilities in patients who have lost the ability to speak, granting it significant clinical importance.
d) Highlights of the Study
High-performance internal speech decoding: This is the first demonstration of high-performance internal speech decoding from the SMG, with online decoding accuracy of up to 79%.
Shared neuronal representation: The study reveals a shared neuronal representation among internal speech, word reading, and vocal speech, indicating the core role of the SMG in various language processing procedures.
Independent encoding of internal and vocal speech: The role of S1 in vocal speech decoding was confirmed, but it was not involved in internal speech processes.
Other Valuable Information
Decodability of Different Internal Speech Strategies: In one participant, high decoding accuracy was achieved with both auditory and visual imagination strategies, suggesting that internal speech BMIs can accommodate multiple internal speech strategies.
Pseudoword Encoding: The SMG represented pseudowords as well as words, further supporting its role in encoding speech sounds rather than only familiar lexical items.
Flexible Decoding Model: The decoding model developed in this paper demonstrated high accuracy at different task stages and cue types, showing the model’s robustness and adaptability.
Conclusion
This study serves as a crucial breakthrough in the field of internal speech BMI by demonstrating the feasibility and high accuracy of decoding internal speech from the SMG. The research not only reveals the potential of the SMG in internal speech decoding but also validates its core function in various language processing procedures. This finding has substantial implications for scientific research and offers new insights for clinical applications, bringing new hope for those who have lost the ability to speak.