An Attention-Based Deep Learning Approach for Sleep Stage Classification with Single-Channel EEG
The paper “An Attention-Based Deep Learning Approach for Sleep Stage Classification with Single-Channel EEG” was published in the IEEE “Transactions on Neural Systems and Rehabilitation Engineering”, Volume 29, 2021. Its authors are Emadeldeen Eldele, Zhenghua Chen, Chengyu Liu, Min Wu, Chee-Keong Kwoh, Xiaoli Li, and Cuntai Guan. The paper's main goal is to introduce a novel attention-based deep learning model for automatic sleep stage classification using single-channel electroencephalogram (EEG) signals.
Research Background
Sleep is an important physiological process that directly affects many aspects of daily life. Studies show that high-quality sleep promotes physical health and brain function, while disrupted sleep is associated with disorders such as insomnia and sleep apnea. Sleep stages (such as light sleep and deep sleep) play a crucial role in the immune system, memory, and metabolism. Sleep monitoring and sleep stage classification are therefore essential for assessing sleep quality.
Traditionally, sleep experts have scored sleep stages from polysomnography (PSG), which records signals such as EEG, electrooculogram (EOG), electromyogram (EMG), and electrocardiogram (ECG). In recent years, single-channel EEG has become attractive for sleep monitoring because of its convenience. Single-channel EEG recordings are usually divided into 30-second epochs, each of which is manually inspected and assigned by sleep experts to one of six stages: wakefulness (W), rapid eye movement (REM), and four non-rapid eye movement stages (N1, N2, N3, and N4). This manual process is complex, time-consuming, and laborious, which motivates automated sleep stage classification systems to assist sleep experts.
Research Motivation and Sources
The strong performance of deep learning across many fields has inspired researchers to apply it to automated sleep stage classification. This study was carried out jointly by researchers from Nanyang Technological University, the Agency for Science, Technology and Research (A*STAR) in Singapore, and Southeast University, and was published in the IEEE “Transactions on Neural Systems and Rehabilitation Engineering” in 2021.
Research Methods
Overall Framework
This article proposes a novel attention-based deep learning model called AttnSleep, which consists of three modules: a feature extraction module, a multi-head attention module, and a classification module.
Feature Extraction Module
The feature extraction module is based on a multi-resolution convolutional neural network (MRCNN) with adaptive feature recalibration (AFR). The MRCNN uses two convolutional branches with different kernel sizes to extract high-frequency and low-frequency features. The AFR module then models the interdependencies among these features through residual squeeze-and-excitation (SE) blocks, further refining the extracted features.
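As a rough illustration, the PyTorch sketch below shows how a two-branch CNN with small and large kernels can feed a squeeze-and-excitation recalibration step. The kernel sizes, strides, channel counts, and the simplified SE block are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight feature channels using global context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (batch, channels, time)
        w = x.mean(dim=-1)                      # squeeze: global average over time
        w = self.fc(w).unsqueeze(-1)            # excitation: per-channel weights in [0, 1]
        return x * w                            # recalibrate the feature maps

class MRCNNSketch(nn.Module):
    """Two CNN branches with small/large kernels, followed by residual SE recalibration."""
    def __init__(self, out_channels=64):
        super().__init__()
        # Small kernels: sensitive to high-frequency content.
        self.small = nn.Sequential(
            nn.Conv1d(1, out_channels, kernel_size=50, stride=6, padding=24),
            nn.BatchNorm1d(out_channels), nn.ReLU(), nn.MaxPool1d(8),
        )
        # Large kernels: sensitive to low-frequency content.
        self.large = nn.Sequential(
            nn.Conv1d(1, out_channels, kernel_size=400, stride=50, padding=200),
            nn.BatchNorm1d(out_channels), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.se = SEBlock(out_channels)

    def forward(self, x):                       # x: (batch, 1, 3000), a 30-s epoch at 100 Hz
        feats = torch.cat([self.small(x), self.large(x)], dim=-1)
        return feats + self.se(feats)           # residual connection around the SE block
```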
Multi-head Attention Module
In the multi-head attention module, causal convolutions are applied to capture the temporal dependencies of the input features. The multi-head attention mechanism lets the model weight the importance of different parts of the input features and processes them in parallel, which significantly reduces training time.
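A minimal sketch of pairing a causal 1-D convolution with multi-head self-attention is shown below; the use of PyTorch's built-in nn.MultiheadAttention and the chosen dimensions are assumptions for illustration, not the authors' exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttention(nn.Module):
    """Causal 1-D convolution to encode local order, then multi-head self-attention."""
    def __init__(self, d_model=64, n_heads=4, kernel_size=7):
        super().__init__()
        self.pad = kernel_size - 1              # left-only padding keeps the convolution causal
        self.causal_conv = nn.Conv1d(d_model, d_model, kernel_size)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):                       # x: (batch, time, d_model)
        h = x.transpose(1, 2)                   # -> (batch, d_model, time) for Conv1d
        h = F.pad(h, (self.pad, 0))             # pad only on the left: position t never sees t+1
        h = self.causal_conv(h).transpose(1, 2)
        out, _ = self.attn(h, h, h)             # heads attend to different parts in parallel
        return out + x                          # residual connection

# Illustrative usage on features shaped like the MRCNN sketch above (channels as d_model).
feats = torch.randn(4, 92, 64)                  # (batch, time steps, feature dim)
print(CausalSelfAttention()(feats).shape)       # torch.Size([4, 92, 64])
```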
Data and Experimental Setup
The study used three public datasets (Sleep-EDF-20, Sleep-EDF-78, and the Sleep Heart Health Study (SHHS)), each providing single-channel EEG signals. The experiments evaluated model performance using overall accuracy (ACC), macro-averaged F1 score (MF1), Cohen's kappa coefficient (κ), and macro G-mean. In addition, per-class precision (PR), recall (RE), F1 score (F1), and G-mean were reported.
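For reference, these epoch-level metrics can be computed as in the scikit-learn sketch below; the labels are placeholder values, and the macro G-mean is derived per class from the confusion matrix as the geometric mean of sensitivity and specificity (the paper's exact definition may differ in detail):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, f1_score)

# Placeholder labels for the five merged stages: 0=W, 1=N1, 2=N2, 3=N3, 4=REM.
y_true = np.array([0, 1, 2, 2, 3, 4, 2, 0, 3, 4])
y_pred = np.array([0, 2, 2, 2, 3, 4, 1, 0, 3, 0])

acc   = accuracy_score(y_true, y_pred)                 # overall accuracy (ACC)
mf1   = f1_score(y_true, y_pred, average="macro")      # macro-averaged F1 (MF1)
kappa = cohen_kappa_score(y_true, y_pred)              # Cohen's kappa

# Macro G-mean: per-class geometric mean of sensitivity and specificity, averaged over classes.
cm = confusion_matrix(y_true, y_pred)
gmeans = []
for k in range(cm.shape[0]):
    tp = cm[k, k]
    fn = cm[k].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    gmeans.append(np.sqrt(sens * spec))

print(acc, mf1, kappa, np.mean(gmeans))
```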
Experimental Results
Classification Accuracy
The results show that the AttnSleep model outperformed existing state-of-the-art techniques on all three datasets, particularly in handling class imbalance. The study used 20-fold cross-validation and provided detailed analysis through confusion matrices.
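If the folds are formed subject-wise (an assumption here; the exact splitting protocol is described in the paper), the evaluation loop can be sketched with scikit-learn's GroupKFold on placeholder data:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Placeholder arrays: one row per 30-s EEG epoch, grouped by subject so that
# epochs from the same subject never appear in both the training and test folds.
X = np.random.randn(200, 3000)            # 200 epochs x 3000 samples (30 s at 100 Hz)
y = np.random.randint(0, 5, size=200)     # stage labels 0..4
groups = np.repeat(np.arange(20), 10)     # 20 hypothetical subjects, 10 epochs each

for fold, (train_idx, test_idx) in enumerate(GroupKFold(n_splits=20).split(X, y, groups)):
    # Train on X[train_idx], y[train_idx]; evaluate on X[test_idx], y[test_idx].
    pass
```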
Model Performance
On the Sleep-EDF-20, Sleep-EDF-78, and SHHS datasets, stage N1 performed worst, with F1 scores below 50%, while stage N3 performed best. Across datasets, misclassifications were mainly concentrated in stage N2, since N2 is the majority class in several of the datasets.
Conclusion
The AttnSleep model proposed in this paper performs strongly on automatic sleep stage classification, surpassing existing state-of-the-art techniques thanks to its feature extraction capability and its efficient handling of temporal dependencies. The results indicate that AttnSleep maintains stable and efficient classification performance across different public datasets, offering new ideas and methods for future applications in sleep monitoring.
In future work, the authors plan to explore transfer learning and domain adaptation techniques so that models trained on labeled datasets can be adapted to classification tasks on other, unlabeled sleep datasets.
Research Value
This study not only demonstrates significant scientific and algorithmic innovation but also has important practical value. The proposed novel deep learning architecture does not require complex feature engineering and can automatically extract and recalibrate features, greatly simplifying the sleep stage classification process. Furthermore, the design of the multi-head attention mechanism and adaptive feature recalibration module offers new solutions for time-series classification problems based on deep learning. These pioneering works will drive the automation and intelligence of sleep monitoring, reducing the clinical workload and allowing more people to benefit from accurate sleep quality monitoring.