Sleep staging from ECG, assisted with EEG

Overview

Sleep_Staging_Knowledge Distillation

This codebase implements a knowledge distillation approach for ECG-based sleep staging, assisted by an EEG-based sleep staging model. Knowledge distillation is incorporated in two ways: softmax distillation and attention-transfer-based feature training. The proposed model combines both.

The code is implemented with the PyTorch Lightning framework. Dependencies are listed in requirements.txt.
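
The softmax-distillation component (the "SD" term in the training options below) amounts to a temperature-scaled KL divergence between the ECG student's and EEG teacher's class posteriors, combined with the usual cross-entropy on the labels. The following is a minimal PyTorch sketch; the temperature, weighting, and tensor shapes are illustrative assumptions rather than the exact settings used in this repository.

  import torch.nn.functional as F

  def softmax_distillation_loss(student_logits, teacher_logits, targets,
                                temperature=4.0, alpha=0.5):
      # Hard-label cross-entropy against the annotated sleep stages (the "CL" term).
      ce = F.cross_entropy(student_logits, targets)
      # Soft-label KL divergence against the EEG teacher's tempered posteriors (the "SD" term).
      kd = F.kl_div(
          F.log_softmax(student_logits / temperature, dim=1),
          F.softmax(teacher_logits / temperature, dim=1),
          reduction="batchmean",
      ) * (temperature ** 2)
      # Illustrative weighting between the two terms.
      return alpha * ce + (1.0 - alpha) * kd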

RESEARCH

DATASET

Montreal Archive of Sleep Studies (MASS) - the complete 200-subject dataset is used.

  • SS1 and SS3 subsets follow the AASM guidelines
  • SS2, SS4 and SS5 subsets follow the R&K (Rechtschaffen & Kales) guidelines

KNOWLEDGE DISTILLATION FRAMEWORK

The knowledge distillation framework uses U-Time, with minor modifications, as the base model.

Knowledge distillation improves the KD_model's bottleneck features over those of the ECG_Base model, as measured against the EEG_Base model's features.

Case 1: the KD_model predicts correctly while the ECG_Base model predicts incorrectly

Case 2: the KD_model predicts incorrectly while the ECG_Base model predicts correctly
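
The attention-transfer component (the "AT" term in the training options below) encourages the ECG student's bottleneck features to form the same activation pattern as the EEG teacher's. Below is a minimal sketch in the style of activation-based attention transfer, assuming 1-D bottleneck feature maps of shape (batch, channels, time); the exact layer choice and loss weighting in this repository may differ.

  import torch.nn.functional as F

  def attention_map(feat):
      # Collapse a bottleneck feature map (batch, channels, time) into a
      # channel-pooled, L2-normalized attention profile over time.
      return F.normalize(feat.pow(2).mean(dim=1), p=2, dim=1)  # (batch, time)

  def attention_transfer_loss(ecg_student_feat, eeg_teacher_feat):
      # Squared L2 distance between student (ECG) and teacher (EEG) attention
      # maps (the "AT" term); the feature maps are assumed to have matching shapes.
      return (attention_map(ecg_student_feat) - attention_map(eeg_teacher_feat)).pow(2).mean()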

Run Training

Run train.py from the 3_class or 4_class directory.

To train baseline models

  python train.py --model_type <"base model type"> --model_ckpt_name <"ckpt name">
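
For example, assuming the baseline model-type strings mirror the file names under models/ (the accepted values are defined in utils/arg_utils.py and may differ), illustrative invocations are:

  python train.py --model_type "eeg_base" --model_ckpt_name "eeg_baseline"
  python train.py --model_type "ecg_base" --model_ckpt_name "ecg_baseline"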

To run Knowledge Distillation

  • Feature Training
  python train.py --model_type "feat_train" --model_ckpt_name <"ckpt name"> --eeg_baseline_path <"eeg base ckpt path">
  • Feat_Temp (AT+SD+CL)
  python train.py --model_type "Feat_Temp" --model_ckpt_name <"ckpt name"> --feat_path <"path to feature trained ckpt">
  • Feat_WCE (AT+CL)
  python train.py --model_type "feat_wce" --model_ckpt_name <"ckpt name"> --feat_path <"path to feature trained ckpt">
  • KD-Temp (SD+CL)
  python train.py --model_type "kd_temp" --model_ckpt_name <"ckpt name"> --eeg_baseline_path <"eeg base ckpt path">

Run Testing

Run test.py from the 3_class or 4_class directory.

To test from checkpoints

  python test.py --model_type <"model type"> --test_ckpt <"path to checkpoint">
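
For example, to evaluate the proposed model from a saved checkpoint (the model-type string and path are placeholders):

  python test.py --model_type "Feat_Temp" --test_ckpt "<path to kd_proposed checkpoint>"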

Additional arguments for training and testing can be supplied as required.

Reproducing experiments

Checkpoints to reproduce the test results can be found at this link.

Directory Map

Dataset Splitting:

Splits the data into train/val/test sets for the 4-class and 3-class cases (both AASM and R&K); an example invocation follows the listing below.

├─ Dataset_split
   ├── Data_split_3class_AllData30s_R_K.py
   ├── Data_split_3class_AllData_AASM.py
   ├── Data_split_AllData_30s_R_K.py
   └── Data_split_All_Data_AASM.py
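
The split scripts are intended to be run directly, assuming the MASS data location is configured inside each script (an assumption; adjust paths as needed), for example:

  python Dataset_split/Data_split_3class_AllData_AASM.py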

3 Class Classification:

Run train.py with the necessary arguments to train 3-class sleep staging

├── 3_class
│   ├── datasets
│   │   ├── __init__.py
│   │   └── mass.py
│   │   
│   ├── models
│   │   ├── __init__.py
│   │   ├── ecg_base.py
│   │   ├── eeg_base.py
│   │   ├── FEAT_TEMP.py
│   │   ├── FEAT_TRAINING.py
│   │   ├── FEAT_WCE.py
│   │   └── KD_TEMP.py
│   │   
│   ├── test.py
│   ├── train.py
│   └── utils
│       ├── __init__.py
│       ├── arg_utils.py
│       ├── callback_utils.py
│       ├── dataset_utils.py
│       └── model_utils.py

4 Class Classification:

Run train.py with the necessary arguments to train 4-class sleep staging

├── 4_class
│   ├── datasets
│   │   ├── __init__.py
│   │   └── mass.py
│   │
│   ├── models
│   │   ├── __init__.py
│   │   ├── ecg_base.py
│   │   ├── eeg_base.py
│   │   ├── FEAT_TEMP.py
│   │   ├── FEAT_TRAINING.py
│   │   ├── FEAT_WCE.py
│   │   └── KD_TEMP.py
│   │   
│   ├── test.py
│   ├── train.py
│   └── utils
│       ├── __init__.py
│       ├── arg_utils.py
│       ├── callback_utils.py
│       ├── dataset_utils.py
│       └── model_utils.py

Acknowledgements

Authors
