Deep Learning-Based Assessment Model for Real-Time Identification of Visual Learners Using Raw EEG
In the current educational environment, understanding students’ learning styles is crucial for improving their learning efficiency. In particular, identifying visual learners can help teachers and students adopt more effective strategies in the teaching and learning process. Automatic identification of visual learning styles currently relies primarily on electroencephalography (EEG) combined with machine learning techniques. However, these techniques typically require offline processing to remove artifacts and extract features, which limits their suitability for real-time applications.
This study, conducted by Soyiba Jawed, Ibrahima Faye, and Aamir Saeed Malik, published in the “IEEE Transactions on Neural Systems and Rehabilitation Engineering” in 2024, proposes a real-time visual learner identification model based on deep learning techniques, aiming to overcome the limitations of traditional methods.
Research Background
Why Was This Study Conducted?
Learning styles play a vital role in the acquisition of knowledge and skills. Therefore, identifying and understanding learning styles is an important part of optimizing teaching methods. Currently available computer-aided systems can use EEG combined with machine learning algorithms to assess learning styles. However, these systems mainly rely on offline feature extraction and processing, which cannot meet the needs of real-time feedback.
Related Background Knowledge
Learning styles are generally divided into four categories: visual, kinesthetic, auditory, and tactile. Statistics indicate that about 70% of students are visual learners, who learn through visual effects such as pictures, videos, and presentations. Additionally, learning styles are related to the neural dynamics of the brain, so studying EEG can provide an objective basis for identifying learning styles.
Source Information
This article was written by the following authors:
- Soyiba Jawed, affiliated with the Faculty of Information Technology at Brno University of Technology in the Czech Republic and the Department of Computer and Software Engineering at the School of Electrical and Mechanical Engineering, National University of Sciences and Technology, Islamabad, Pakistan.
- Ibrahima Faye, affiliated with the Department of Fundamental and Applied Sciences at Universiti Teknologi Petronas, Malaysia.
- Aamir Saeed Malik, affiliated with the Department of Computer Systems at the Faculty of Information Technology, Brno University of Technology.
The study was published in the “IEEE Transactions on Neural Systems and Rehabilitation Engineering” in 2024. The research was funded by the Czech Science Foundation project, and all ethical and experimental procedures were approved by the Human Research Ethics Committees of Universiti Teknologi Petronas and Universiti Sains Malaysia.
Research Methodology
Data Collection and Subjects
The study recruited 34 healthy subjects, whose EEG signals were recorded during resting states (eyes open and eyes closed) and while performing learning tasks. Subjects were aged 18 to 30 years (mean 23.17 ± 3.04 years). All subjects had normal or corrected-to-normal vision, no neurological disorders or hearing impairments, and were not taking any medication.
Experimental Tasks
Subjects were required to perform two main tasks: a learning task and a memory task.
- Learning Task: Consisted of 8 to 10 minutes of animated human anatomy content, with subjects having no prior knowledge of the content.
- Memory Task: Consisted of 20 multiple-choice questions related to the learning content, with 30 seconds allocated per question.
EEG Recording and Data Processing
EEG signals were recorded continuously with an EGI system using a 128-channel HydroCel Geodesic Sensor Net. Signals were amplified and sampled at a rate of 250 Hz.
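The study feeds raw EEG to the networks rather than handcrafted features. A common way to prepare such a continuous multi-channel recording for a sequence model is to slice it into fixed-length windows; the sketch below is illustrative only, and the window length and overlap are assumptions, not values reported in the paper.

```python
import numpy as np

def segment_eeg(eeg, fs=250, win_sec=2.0, overlap=0.5):
    """Slice a (channels, samples) recording into overlapping windows.

    fs      -- sampling rate in Hz (250 Hz in the study)
    win_sec -- window length in seconds (assumed value)
    overlap -- fraction of overlap between consecutive windows (assumed)
    Returns an array of shape (n_windows, channels, win_samples).
    """
    win = int(fs * win_sec)
    step = int(win * (1 - overlap))
    n_channels, n_samples = eeg.shape
    starts = range(0, n_samples - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])

# Example: 10 s of synthetic 128-channel EEG sampled at 250 Hz
eeg = np.random.randn(128, 2500)
windows = segment_eeg(eeg)
print(windows.shape)  # (9, 128, 500)
```

Each window can then be presented to the model as one training example, which is what makes streaming (real-time) classification possible.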
Deep Learning Algorithms
The study evaluated three deep learning models built from Long Short-Term Memory networks (LSTM), Convolutional Neural Networks (CNN), and Fully Convolutional Neural Networks (FCNN). The specific models were as follows:
- LSTM Model: Used for processing sequential data, capable of capturing long-range dependencies in time series.
- LSTM-CNN Model: Combines the sequential feature extraction ability of LSTM with the local feature extraction capability of CNN, achieving higher classification accuracy.
- LSTM-FCNN Model: Connects a fully convolutional network after the LSTM layer to further extract and classify features.
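The hybrid idea above — an LSTM over the raw time series followed by convolutional feature extraction — can be sketched in PyTorch. This is a minimal illustration under assumed layer sizes; the paper does not publish its architecture at this level of detail.

```python
import torch
import torch.nn as nn

class LSTMCNN(nn.Module):
    """Illustrative LSTM-CNN hybrid: LSTM over the raw EEG sequence,
    then 1-D convolutions over the LSTM outputs. All layer sizes here
    are assumptions for demonstration, not the study's configuration."""
    def __init__(self, n_channels=128, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            batch_first=True)
        self.conv = nn.Sequential(
            nn.Conv1d(hidden, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time to a single vector
        )
        self.fc = nn.Linear(32, n_classes)  # visual vs. non-visual learner

    def forward(self, x):            # x: (batch, time, channels)
        out, _ = self.lstm(x)        # (batch, time, hidden)
        out = out.transpose(1, 2)    # (batch, hidden, time) for Conv1d
        out = self.conv(out).squeeze(-1)
        return self.fc(out)

model = LSTMCNN()
logits = model(torch.randn(4, 500, 128))  # 4 windows of 2 s at 250 Hz
print(logits.shape)  # torch.Size([4, 2])
```

The design point is that the LSTM captures temporal dependencies while the convolutions extract local patterns from its outputs, which is the combination the paper credits for the LSTM-CNN model's accuracy.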
Data Set Partitioning
The data set was divided into training, validation, and test sets. During training, network parameters were optimized using the Adam optimizer, and early stopping was employed to prevent overfitting.
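Early stopping, as used in the training procedure, amounts to halting once the validation loss stops improving for a fixed number of epochs. A minimal sketch (the patience value is an assumption, not taken from the paper):

```python
class EarlyStopping:
    """Stop when validation loss has not improved for `patience` epochs."""
    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience    # epochs to wait (assumed value)
        self.min_delta = min_delta  # minimum improvement that counts
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Return True if training should stop after this epoch."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss    # improvement: reset the counter
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopping(patience=3)
losses = [0.9, 0.7, 0.6, 0.61, 0.62, 0.60, 0.63]
for epoch, loss in enumerate(losses):
    if stopper.step(loss):
        print(f"stopped at epoch {epoch}")  # stopped at epoch 5
        break
```

In practice the same check wraps the training loop: evaluate on the validation set each epoch, pass the loss to `step`, and restore the best weights when it signals a stop.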
Main Research Results
Model Performance Evaluation
- LSTM Model: Average accuracy of 96%, sensitivity of 90%, specificity of 95%.
- LSTM-CNN Model: Average accuracy of 94%, sensitivity of 80%, specificity of 92%, F1 score of 94%.
- LSTM-FCNN Model: Average accuracy of 89%, sensitivity of 84%, specificity of 90%.
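The reported accuracy, sensitivity, specificity, and F1 score all follow from the confusion matrix in the standard way. A minimal computation, using made-up counts rather than the study's data:

```python
def binary_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall), specificity, and F1 from counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)      # true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Illustrative counts only -- not taken from the paper
acc, sens, spec, f1 = binary_metrics(tp=45, tn=49, fp=1, fn=5)
print(f"acc={acc:.2f} sens={sens:.2f} spec={spec:.2f} f1={f1:.2f}")
# acc=0.94 sens=0.90 spec=0.98 f1=0.94
```

Sensitivity here measures how many true visual learners the model catches, while specificity measures how reliably it rejects non-visual learners.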
Result Analysis and Comparison
The results show that the LSTM and LSTM-CNN models achieve similar classification accuracy, but the LSTM-CNN model requires less computation time, making it better suited to real-time use. Compared with traditional handcrafted feature extraction methods, the deep learning approach significantly improves classification accuracy and removes the need for cumbersome preprocessing steps.
Conclusion
Research Significance and Value
This study demonstrates the effectiveness of deep learning techniques based on raw EEG data for real-time identification of visual learners. Compared with traditional machine learning methods, deep learning methods not only have advantages in classification accuracy but also enable real-time processing, thereby greatly improving the efficiency of teaching and learning.
Research Highlights
- High-Accuracy Real-Time Identification: The optimized LSTM-CNN model achieved 94% classification accuracy.
- No Need for Offline Processing: Raw EEG data is directly input into the model, with no need for handcrafted feature extraction and preprocessing, suitable for practical applications.
- Innovative Deep Learning Approach: The proposed model combines the strengths of LSTM and CNN, capable of capturing more complex features in EEG signals.
Future Research Directions
Despite these achievements, further studies are needed to improve the model's generalization ability and to explore its application to identifying other learning styles. This research provides the education field with a new deep learning-based method for real-time, accurate identification of visual learners, demonstrating both scientific value and practical significance.