Deep Learning Emotion decoding using EEG data from Autism individuals

Overview


This repository includes the Python and MATLAB code used for processing EEG 2D images with a customized Convolutional Neural Network (CNN) to decode emotional visual stimuli in individuals with and without Autism Spectrum Disorder (ASD).

If you would like to use this repository to replicate our experiments with this data, or to use your own data, please cite the following paper; more details about this code and its implementation are described there as well:

Mayor Torres, J.M.*, Clarkson, T.*, Hauschild, K.M., Luhmann, C.C., Lerner, M.D., Riccardi, G., Facial emotions are accurately encoded in the brains of those with autism: A deep learning approach. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging (2021).

Requirements

  • TensorFlow >= v1.20
  • sklearn
  • subprocess
  • numpy
  • csv
  • MATLAB > R2018b

For the Python code we provide:

1. A baseline code to evaluate a Leave-One-Trial-Out (LOTO) cross-validation from two .csv files: one including all the training trials with their corresponding labels, and another with the test features of the single trial you want to evaluate. The train and test data files should share an identifier so they can be paired by the for loop used for the cross-validation. The code to run the baseline classifier is located in the folder classifier_EEG_call.
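
For orientation, below is a minimal sketch of such a pairing loop. The file naming, column layout, and the scikit-learn stand-in classifier are assumptions for illustration only; the actual CNN pipeline lives in classifier_EEG_call.

   # Illustrative LOTO pairing loop -- a minimal sketch, not the repository's code.
   # Assumed naming: data/train_<trial_id>.csv and data/test_<trial_id>.csv,
   # with features flattened per row and the label in the last column.
   import glob, os
   import numpy as np
   from sklearn.linear_model import LogisticRegression   # stand-in for the customized CNN

   def load_csv(path):
       data = np.loadtxt(path, delimiter=',', ndmin=2)
       return data[:, :-1], data[:, -1]                   # features, labels

   predictions = {}
   for train_file in sorted(glob.glob('data/train_*.csv')):
       trial_id = os.path.basename(train_file)[len('train_'):-len('.csv')]
       test_file = os.path.join('data', 'test_%s.csv' % trial_id)

       X_train, y_train = load_csv(train_file)
       X_test, _ = load_csv(test_file)

       clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
       predictions[trial_id] = clf.predict(X_test)        # prediction for the held-out trial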

Pipeline for EEG Emotion Decoding

To run the classifier pipeline, simply download the .py files in the folder classifier_EEG_call and execute the following command in your bash prompt:

   python LOTO_lauch_emotions_test.py "data_path_file_including_train_test_files"

Please be sure your .csv files have a flattened time-points x channels EEG image after you remove artifacts and noise from the signal, preferably using the ADJUST EEGLAB pipeline (https://sites.google.com/a/unitn.it/marcobuiatti/home/software/adjust).
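
As a hedged example of what one such flattened row could look like (the epoch shape, label, and output file name are assumptions, not the repository's format):

   # Flatten one cleaned EEG epoch (time-points x channels) into a single CSV row.
   import numpy as np

   epoch = np.random.randn(500, 64)                   # e.g. 500 time-points x 64 channels, already artifact-free
   label = 3                                          # emotion label for this trial

   row = np.concatenate([epoch.flatten(), [label]])   # flattened image, label in the last column
   with open('train_trial_001.csv', 'a') as f:
       np.savetxt(f, row[np.newaxis, :], delimiter=',')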

The final results will be produced in a .txt file in the output folder of your choice. Some metrics obtained from a sample of 88 ADOS-2 diagnosed participants (48 controls and 40 ASD) are the following:

   Metrics/Groups   FER Acc   FER Pre   FER Re   FER F1   CNN Acc   CNN Pre   CNN Re   CNN F1
   TD               0.813     0.808     0.802    0.807    0.860     0.864     0.860    0.862
   ASD*             0.776     0.774     0.768    0.771    0.934     0.935     0.933    0.934

Face Emotion Recognition (FER) task performance denotes the human performance obtained when participants labeled the same stimuli that were presented to obtain the EEG activity.
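
The accuracy, precision, recall, and F1 entries above can be computed from the per-trial predictions with scikit-learn; macro averaging over the emotion classes is an assumption about how the per-class scores were combined:

   # Compute Acc / Pre / Re / F1 for one group from true vs. predicted labels.
   from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

   def summarize(y_true, y_pred):
       return {
           'Acc': accuracy_score(y_true, y_pred),
           'Pre': precision_score(y_true, y_pred, average='macro'),
           'Re':  recall_score(y_true, y_pred, average='macro'),
           'F1':  f1_score(y_true, y_pred, average='macro'),
       }

   # Example: summarize(y_true_td, y_pred_td) for the TD group,
   #          summarize(y_true_asd, y_pred_asd) for the ASD group.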

2. Code for using the iNNvestigate package (https://github.com/albermax/innvestigate) to compute saliency maps and unify them across the LOTO cross-validation mentioned in the first item. The code is located in the folder iNNvestigate_evaluation.
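
A minimal sketch of how one fold's saliency maps might be produced and later unified, assuming the iNNvestigate 1.x API and a simple average across folds; the repository's scripts in iNNvestigate_evaluation are the authoritative implementation:

   # Illustrative saliency-map computation for one LOTO fold with iNNvestigate 1.x.
   # The analyzer name and the averaging step are assumptions, not the exact procedure.
   import numpy as np
   import innvestigate
   import innvestigate.utils as iutils

   def saliency_for_fold(keras_model, x_test):
       model_wo_sm = iutils.model_wo_softmax(keras_model)             # strip the softmax as recommended
       analyzer = innvestigate.create_analyzer('lrp.epsilon', model_wo_sm)
       return analyzer.analyze(x_test)                                # relevance map per test trial

   # Unify across folds, e.g. by averaging the per-trial maps of one group:
   # group_map = np.mean(np.stack(all_fold_maps), axis=0)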

To run the iNNvestigate evaluation, simply download the .py files in the folder iNNvestigate_evaluation and execute the following command in your bash prompt:

   python LOTO_lauch_emotions_test_innvestigate.py "data_path_file_including_train_test_files" num_method

The value num_method is defined by the order in which the iNNvestigate package processes saliency maps. For our specific case the correspondence is:

  • 'Original Image' -> 0
  • 'Gradient' -> 1
  • 'SmoothGrad' -> 2
  • 'DeconvNet' -> 3
  • 'GuidedBackprop' -> 4
  • 'PatternNet' -> 5
  • 'PatternAttribution' -> 6
  • 'DeepTaylor' -> 7
  • 'Input * Gradient' -> 8
  • 'Integrated Gradients' -> 9
  • 'LRP-epsilon' -> 10
  • 'LRP-Z' -> 11
  • 'LRP-APresetflat' -> 12
  • 'LRP-BPresetflat' -> 13
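
For reference, one possible way to translate num_method into iNNvestigate analyzer identifiers is shown below; the exact identifiers used in the repository's scripts may differ, so treat this mapping as an assumption:

   # Assumed num_method -> iNNvestigate analyzer-name mapping (illustrative only).
   ANALYZERS = {
       0: 'input',                          # 'Original Image'
       1: 'gradient',
       2: 'smoothgrad',
       3: 'deconvnet',
       4: 'guided_backprop',
       5: 'pattern.net',
       6: 'pattern.attribution',
       7: 'deep_taylor',
       8: 'input_t_gradient',               # Input * Gradient
       9: 'integrated_gradients',
       10: 'lrp.epsilon',
       11: 'lrp.z',
       12: 'lrp.sequential_preset_a_flat',  # LRP-A preset (flat), assumed
       13: 'lrp.sequential_preset_b_flat',  # LRP-B preset (flat), assumed
   }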

An example of the saliency maps obtained from the LRP-B preset is shown below.

Significant differences between the relevance of the Controls and ASD groups are observed at 750-1250 ms relative to the stimulus onset.

[Saliency map figures: relevance maps for the Controls and ASD groups obtained with the LRP-B preset]

For the MATLAB code, we provide scripts for reading the resulting output performance files of the CNN baseline classifier (Reading_CNN_performances); the same command call also works for the iNNvestigate methods, because the output files follow the same syntax.

To check performances, first download the files in the Reading_CNN_performances folder, place the resulting .csv files in a folder of your choice, and run the following command at your MATLAB prompt:

   read_perf_convnets_subjects('suffix_file','performance_data_path')