Mixed Selectivity in Monkey Anterior Intraparietal Area During Visual and Motor Processes
Background
In recent years, the anterior intraparietal area (AIP) has attracted considerable interest in neuroscience. AIP is regarded as a convergence node for multiple streams of visual and somatosensory information, including the physical attributes of objects, the actions of others, and motor signals and higher-order information from the frontal cortex. However, the fundamental principles of this multimodal encoding remain unclear, particularly how AIP neurons encode information across different tasks and conditions.
The traditional view holds that AIP neurons fall into several classes: motor neurons, visual neurons, canonical neurons, and mirror neurons, which encode purely motor information, purely visual information, or both visual and motor information about objects (canonical) or actions (mirror). However, most of the supporting studies examined single tasks, leaving open how multimodal encoding in AIP manifests under diverse conditions.
Source
The paper, titled “Mixed Selectivity in Monkey Anterior Intraparietal Area During Visual and Motor Processes,” was authored by Monica Maranesi, Marco Lanzilotto, Edoardo Arcuri, and Luca Bonini, all affiliated with the Department of Medicine and Surgery at the University of Parma, Italy. It was published in the journal Progress in Neurobiology and was available online as of April 10, 2024.
Research Process
To explore the fundamental principles of multimodal encoding in AIP, the research team chronically recorded single-neuron activity in two monkeys. The experiments involved the following tasks and conditions (a sketch of an assumed trial structure follows the list):
1. Visuo-Motor Task (VMT): the monkeys performed a go/no-go task in which they either grasped objects or remained still.
2. Observation Tasks: the monkeys watched an experimenter perform the same task, either within their peripersonal space (OTP) or in extrapersonal space (OTE).
3. Video Observation Task (OTV): the monkeys watched videos of goal-directed or pretend hand actions, as well as static or dynamic videos of isolated objects.
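To make the task phases referenced below concrete, here is a minimal Python sketch of one VMT trial’s epoch structure. All epoch names and timings are illustrative assumptions for this report, not values taken from the paper:

```python
# Hypothetical VMT trial structure; names and durations are illustrative
# assumptions, not parameters reported in the paper.
from dataclasses import dataclass

@dataclass
class Epoch:
    name: str       # label used when aligning spike times
    start_s: float  # onset relative to trial start (illustrative)
    end_s: float

# One go/no-go trial: fixation baseline, object presentation (visual
# phase), go/no-go cue, then grasp or hold depending on the cue.
VMT_TRIAL = [
    Epoch("baseline", 0.0, 0.5),
    Epoch("object_presentation", 0.5, 1.5),
    Epoch("go_or_nogo_cue", 1.5, 1.7),
    Epoch("action_or_hold", 1.7, 3.0),
]
```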
The team analyzed, in stages, the activity of 134 single neurons recorded at different sites within the AIP of the two monkeys. Most cells exhibited mixed selectivity for observed objects, executed actions, and observed actions, especially when that information originated within the monkey’s peripersonal workspace.
Main Results
Neuron Classification: Based on their responses in the VMT, neurons were classified as visuo-motor, visual-related, motor-related, or task-unrelated: 79 were visuo-motor, 26 visual-related, 13 motor-related, and 16 task-unrelated.
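The classification logic can be illustrated with a short sketch: a neuron is labeled according to whether its firing in the visual and motor epochs differs significantly from baseline. The statistical test and alpha level here are illustrative assumptions; the paper’s exact criteria may differ:

```python
# Minimal sketch of task-epoch classification; test choice and alpha are
# illustrative assumptions, not the paper's exact criteria.
import numpy as np
from scipy.stats import wilcoxon

ALPHA = 0.05

def classify_neuron(baseline, visual, motor):
    """Classify one neuron from per-trial firing rates (spikes/s) in the
    baseline, visual (object presentation), and motor (grasp) epochs."""
    vis_mod = wilcoxon(visual, baseline).pvalue < ALPHA
    mot_mod = wilcoxon(motor, baseline).pvalue < ALPHA
    if vis_mod and mot_mod:
        return "visuo-motor"
    if vis_mod:
        return "visual-related"
    if mot_mod:
        return "motor-related"
    return "task-unrelated"

# Example with fabricated data: 30 trials per epoch.
rng = np.random.default_rng(0)
baseline = rng.poisson(5.0, 30).astype(float)
visual = rng.poisson(9.0, 30).astype(float)
motor = rng.poisson(12.0, 30).astype(float)
print(classify_neuron(baseline, visual, motor))  # likely "visuo-motor"
```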
Mixed Selectivity: The vast majority (96%) of visuo-motor neurons also responded to others’ actions (live, on video, or both) during the observation tasks, whereas fewer than 10% of neurons showed pure selectivity for any single type of stimulus. More than 90% of neurons also modulated their firing when observing the target object or visual feedback of the monkey’s own action.
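As a rough illustration of how such overlap percentages could be tallied, the following sketch counts mixed versus pure selectivity from a hypothetical boolean neuron-by-task responsiveness table; the data are fabricated for illustration only:

```python
# Counting mixed vs. pure selectivity from a fabricated responsiveness
# table (neurons x tasks); proportions here are arbitrary.
import numpy as np

tasks = ["VMT", "OTP", "OTE", "OTV"]
rng = np.random.default_rng(1)
responsive = rng.random((134, len(tasks))) < 0.7  # fake boolean table

vmt = responsive[:, 0]
any_observation = responsive[:, 1:].any(axis=1)

# Share of VMT-responsive neurons that also respond to others' actions.
mixed = (vmt & any_observation).sum() / vmt.sum()
# Share responding in exactly one task: "pure" selectivity.
pure = (responsive.sum(axis=1) == 1).mean()
print(f"mixed among VMT cells: {mixed:.0%}, pure across sample: {pure:.0%}")
```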
Behavioral Task Clustering: Mahalanobis distances computed between the conditions of the different tasks and phases revealed that, already in the baseline phase, tasks segregated according to whether they took place within or outside the monkey’s workspace; segregation by target object then emerged more clearly in the subsequent visual and motor phases.
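Below is a minimal sketch of this kind of condition-distance analysis, assuming population responses have first been reduced to a handful of dimensions so that the pooled covariance is invertible; all data are synthetic:

```python
# Mahalanobis distance between two condition means in a reduced
# population space; dimensions and data are synthetic illustrations.
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(2)
n_dims = 10  # e.g., population activity reduced to a few components
cond_a = rng.normal(5.0, 1.0, (40, n_dims))  # trials x dims, condition A
cond_b = rng.normal(5.5, 1.0, (40, n_dims))  # trials x dims, condition B

# Pooled covariance across both conditions, then its inverse.
centered = np.vstack([cond_a - cond_a.mean(0), cond_b - cond_b.mean(0)])
vi = np.linalg.inv(np.cov(centered.T))

d = mahalanobis(cond_a.mean(0), cond_b.mean(0), vi)
print(f"Mahalanobis distance between condition means: {d:.2f}")
```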
Distinguishing Pragmatic from Purely Visual Encoding: A subset of neurons was tested with a barrier that prevented the monkey from grasping the object. Most visually responsive neurons were significantly affected by the barrier, indicating that their object responses reflect pragmatic (action-related) encoding, whereas a minority encoded purely the visual features of the 3D objects.
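The barrier comparison can be illustrated with a simple sketch: an unpaired test of a neuron’s visual-epoch firing with and without the barrier. The test choice, threshold, and data are illustrative assumptions:

```python
# Illustrative barrier comparison for one neuron; test and alpha are
# assumptions, and the firing rates are fabricated.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
no_barrier = rng.poisson(10.0, 25).astype(float)   # visual-epoch rates
with_barrier = rng.poisson(6.0, 25).astype(float)  # barrier trials

stat, p = mannwhitneyu(no_barrier, with_barrier)
if p < 0.05:
    print("response modulated by barrier -> pragmatic (action-related) coding")
else:
    print("response unchanged -> purely visual object coding")
```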
Conclusion
The results indicate that nearly all visuo-motor neurons in AIP encode not only objects and the monkey’s own actions but also the actions of others. The traditional classification based on single-variable selectivity therefore appears overly simplistic: AIP neurons instead exhibit varied and combined selectivities, underscoring the importance of pragmatic coding similar to that described in the premotor cortex.
Notably, the activity of some AIP neurons undergoes complex, context-dependent modulation by both visual and motor factors. In addition, unsupervised hierarchical cluster analysis identified distinct neural clusters across tasks and conditions, revealing the complex, multivariate modulation of these neurons by task demands.
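A minimal sketch of such an unsupervised analysis, clustering neurons by their condition response profiles, is shown below; the linkage method and number of clusters are illustrative choices, not the paper’s:

```python
# Hierarchical clustering of neurons by response profile; linkage method,
# cluster count, and data are illustrative choices.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
# Fake response matrix: 134 neurons x 12 task/phase conditions, z-scored.
profiles = rng.normal(0.0, 1.0, (134, 12))

Z = linkage(profiles, method="ward")             # agglomerative clustering
labels = fcluster(Z, t=5, criterion="maxclust")  # cut tree into 5 clusters
print(np.bincount(labels)[1:])                   # neurons per cluster
```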
Highlights
- Novel Evidence of Mixed Selectivity: The study demonstrated that AIP neurons are not merely selective for single types of information (such as objects or actions) but exhibit mixed selectivity across them.
- Distributed Encoding Scheme: Multimodal encoding in AIP is achieved through partially mixed selectivity distributed across neurons, tasks, and conditions, rather than through independent encoding by dedicated categories of neurons.
- Challenge to Traditional Views: The research challenges the traditional practice of categorizing neurons by pure selectivity, offering a more nuanced and comprehensive account of AIP function.
Research Value
This study has important theoretical significance for basic neuroscience and potential clinical relevance for understanding and treating disorders of parietal function (such as spatial neglect and limb apraxia). It also opens new directions for future neuroscience research, especially regarding multivariate selectivity and distributed encoding.
Summary
By meticulously recording and analyzing single-neuron activity within monkey AIP, this study revealed mixed selectivity in the processing of visual and motor information, together with its complex modulation, significantly advancing our understanding of how multimodal information is integrated and encoded in the brain.