Heterogeneous Datasets for TinyUStaging
- Submitted by: Jingyi Lu
- Last updated: Thu, 09/22/2022 - 23:30
- DOI: 10.21227/vg0p-hn48
Abstract
Nowadays, more and more machine learning models have emerged in the field of sleep staging. However, they have not been widely used in practice, which may be due to the limited clinical and subject diversity behind these models and the lack of persuasive, guaranteed generalization performance beyond the given datasets. Meanwhile, polysomnography (PSG), the gold standard of sleep staging, is rather intrusive and expensive. In this paper, we propose a novel automatic sleep staging architecture called TinyUStaging using single-lead EEG and EOG. TinyUStaging is an efficient U-Net with multiple attention modules, including a Channel and Spatial Joint Attention (CSJA) block and a Squeeze-and-Excitation (SE) block. In addition, we design sampling strategies and propose a class-aware Sparse Weighted Dice and Focal (SWDF) loss function; the results show that it significantly improves the recognition rate for minority classes and hard samples such as N1 sleep. Notably, we select seven highly heterogeneous datasets covering 9,970 records with over 20,000 hours from 7,226 subjects spanning 950 days for training, validation, and evaluation. Additionally, two hold-out sets containing healthy and sleep-disordered subjects are used to verify the model's generalization. The results demonstrate that our model outperforms state-of-the-art methods, achieving an average overall accuracy, macro F1-score, and Cohen's kappa of 84.62%, 0.796, and 0.764 on the heterogeneous datasets, providing a solid foundation for out-of-hospital sleep monitoring.
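The exact SWDF formulation is defined in the paper. As a rough illustration of the general idea of combining class-weighted Dice and Focal terms to handle imbalanced sleep stages, here is a minimal PyTorch sketch; the shapes, class weights, and `gamma` value are placeholders, not the paper's settings, and the sparse weighting of SWDF is not reproduced here.

```python
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, targets, class_weights, gamma=2.0, eps=1e-6):
    """Illustrative class-weighted Dice + Focal loss for per-epoch sleep staging.

    logits:        (batch, n_classes, n_epochs) raw model outputs
    targets:       (batch, n_epochs) integer stage labels
    class_weights: (n_classes,) tensor, e.g. emphasizing minority stages such as N1
    """
    n_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)                              # (B, C, T)
    onehot = F.one_hot(targets, n_classes).permute(0, 2, 1).float()   # (B, C, T)

    # Soft Dice per class, combined with the class weights
    intersection = (probs * onehot).sum(dim=(0, 2))
    cardinality = probs.sum(dim=(0, 2)) + onehot.sum(dim=(0, 2))
    dice = (2 * intersection + eps) / (cardinality + eps)             # (C,)
    dice_loss = (class_weights * (1 - dice)).sum() / class_weights.sum()

    # Focal term: down-weight easy samples, re-weight classes
    ce = F.cross_entropy(logits, targets, reduction="none")           # (B, T)
    pt = torch.exp(-ce)
    w = class_weights[targets]                                        # (B, T)
    focal_loss = (w * (1 - pt) ** gamma * ce).mean()

    return dice_loss + focal_loss
```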
README
TinyUStaging
TinyUStaging: An Efficient Model for Sleep Staging with Single-Channel EEG and EOG
- See the HD figure in supplementary_materials\Figures\FigS1.tif
Codes
- The source code will be made public after the paper is accepted.
- The model framework has been open-sourced; see bin/defaults.
- You are welcome to submit issues to help improve this work!
Workflow
- Schematic illustration of our TinyUStaging workflow
- Data. We select any combination of one EEG channel and one EOG channel as input.
- Cross-validation. We apply 5-fold subject-wise cross-validation on seven datasets totaling over 750 GB; in each fold, 75%, 10%, and 15% of the data are used to train, validate, and evaluate the model, respectively (a subject-wise splitting sketch is given after this list).
- Pre-processing. We apply data scaling and data augmentation to make the model more robust.
- Model. We train a 4-layer U-Net consisting of an Encoder, a Decoder, and a Random Window Classifier (RWC), with SE and CSJA modules.
- Predict. We output the confidence scores of each class over the entire RWC.
- Evaluate. TinyUStaging uses per-class metrics and overall metrics (accuracy, precision, recall, F1-score, Cohen's kappa); a metrics sketch also follows this list.
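A minimal sketch of subject-wise splitting, assuming a hypothetical record inventory; the actual project uses its own splitter, and GroupShuffleSplit is used here only to approximate the 75/10/15 proportions while keeping all records of a subject in the same split.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical inventory: one entry per PSG record, tagged with its subject ID
# so that all records of a subject land in the same split (subject-wise CV).
records = np.array([f"rec_{i:04d}" for i in range(100)])
subjects = np.array([f"subj_{i // 2:03d}" for i in range(100)])  # e.g. two records per subject

outer = GroupShuffleSplit(n_splits=5, test_size=0.15, random_state=0)
for fold, (dev_idx, test_idx) in enumerate(outer.split(records, groups=subjects)):
    # Split the remaining ~85% into ~75%/10% of the total, again grouped by subject.
    inner = GroupShuffleSplit(n_splits=1, test_size=10 / 85, random_state=fold)
    train_rel, val_rel = next(inner.split(records[dev_idx], groups=subjects[dev_idx]))
    train_idx, val_idx = dev_idx[train_rel], dev_idx[val_rel]

    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val / {len(test_idx)} test records")
```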
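And a minimal sketch of the overall and per-class metrics listed above, computed with scikit-learn; the stage names and integer label encoding are assumptions, not the project's actual conventions.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             precision_recall_fscore_support)

STAGES = ["Wake", "N1", "N2", "N3", "REM"]  # assumed 5-class AASM labels

def staging_report(y_true, y_pred):
    """Overall and per-class metrics for predicted sleep stages (integer labels)."""
    acc = accuracy_score(y_true, y_pred)
    kappa = cohen_kappa_score(y_true, y_pred)
    prec, rec, f1, support = precision_recall_fscore_support(
        y_true, y_pred, labels=range(len(STAGES)), zero_division=0
    )
    print(f"accuracy={acc:.4f}  macro-F1={f1.mean():.4f}  kappa={kappa:.4f}")
    for i, stage in enumerate(STAGES):
        print(f"{stage:>4}: precision={prec[i]:.3f} recall={rec[i]:.3f} "
              f"f1={f1[i]:.3f} n={support[i]}")

# Example call with random labels, just to show usage
rng = np.random.default_rng(0)
staging_report(rng.integers(0, 5, 1000), rng.integers(0, 5, 1000))
```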
Model Architecture
Overall
CSJA block
SE block
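The figures for these blocks are given in the paper and supplementary materials. For reference, the standard Squeeze-and-Excitation operation the SE block builds on can be sketched as follows for 1-D signals in PyTorch; the reduction ratio and the exact placement inside TinyUStaging are defined by the released configuration (hparams.yaml) and code, not by this sketch.

```python
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Generic Squeeze-and-Excitation over the channels of a 1-D feature map."""

    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, channels, time)
        s = x.mean(dim=-1)             # squeeze: global average pooling over time
        w = self.fc(s).unsqueeze(-1)   # excitation: per-channel weights in (0, 1)
        return x * w                   # recalibrate the feature map channel-wise

# Example: recalibrate a (batch=2, channels=64, time=3000) feature map
y = SEBlock1d(64)(torch.randn(2, 64, 3000))
```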
Results
Subject-wise
- See supplementary_materials\paper_plot
- e.g., Case tpf435cf71_2574_49b2_bad0_5feceaa69d23 in DCSM: see supplementary_materials\paper_plot\pro44\test_data\dcsm\plots
- e.g., Case SC4191E0 in Sleep-EDF
Dataset-wise
- This will be made public after the paper is accepted.
All Test Sets
- Results on the seven highly heterogeneous datasets
- For more visualization results, see our paper and the supplementary_materials folder in the project.
Dataset Files
- hparams.yaml (5.44 kB)
- ustaging4_ljy_cbam2se_tiny.zip (5.65 kB)
Comments
For more details: https://github.com/ljyljy/TinyUStaging