EEGEyeNet

EEGEyeNet is a benchmark to evaluate ET prediction based on EEG measurements with an increasing level of difficulty.

Overview

The repository provides the general functionality needed to run the benchmark, together with custom implementations of different machine learning models. Standard ML models (e.g. kNN, SVR) can be run on the benchmark; their implementation can be found in the StandardML_Models directory.

Additionally, we implemented a variety of deep learning models, which can be run in both pytorch and tensorflow.

The benchmark consists of three tasks: LR (left-right), Direction (angle, amplitude) and Coordinates (x, y).

Installation (Environment)

This benchmark has many dependencies, so we recommend using anaconda as the package manager.

You can install a full environment to run all models (standard machine learning and deep learning models in both pytorch and tensorflow) from the eegeyenet_benchmark.yml file. To do so, run:

conda env create -f eegeyenet_benchmark.yml

Alternatively, you can create a minimal environment that only runs the models you want to try (see the following sections).

General Requirements

Create a new conda environment:

conda create -n eegeyenet_benchmark python=3.8.5 

First, install the general requirements from general_requirements.txt:

conda install --file general_requirements.txt 

Pytorch Requirements

If you want to run the pytorch DL models, first install pytorch in the recommended way. For Linux users with GPU support this is:

conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch 

For other installation types and cuda versions, visit pytorch.org.

Tensorflow Requirements

If you want to run the tensorflow DL models, run

conda install --file tensorflow_requirements.txt 

Standard ML Requirements

If you want to run the standard ML models, run

conda install --file standard_ml_requirements.txt 

Install these after pytorch to avoid dependency conflicts that conda would otherwise have to resolve.

Configuration

The model configuration takes place in hyperparameters.py. The training configuration is contained in config.py.

config.py

We start by explaining the settings that can be made for running the benchmark:

Choose the task to run in the benchmark, e.g.

config['task'] = 'LR_task'

For some tasks we offer data from multiple paradigms. Choose the dataset used for the task, e.g.

config['dataset'] = 'antisaccade'

Choose the preprocessing variant, e.g.

config['preprocessing'] = 'min'

Choose whether to use data preprocessed with the Hilbert transformation. Set to True for the standard ML models:

config['feature_extraction'] = True

Include our standard ML models into the benchmark run:

config['include_ML_models'] = True 

Include our deep learning models into the benchmark run:

config['include_DL_models'] = True

Include your own models as specified in hyperparameters.py. For instructions on how to create your own custom models see further below.

config['include_your_models'] = True

Include dummy models for comparison into the benchmark run:

config['include_dummy_models'] = True

You can either choose to train models or use existing ones in /run/ and perform inference with them. Set

config['retrain'] = True 
config['save_models'] = True 

to train your specified models. Set both to False if you want to load existing models and perform inference. In this case specify the path to your existing model directory under

config['load_experiment_dir'] = path/to/your/model 

In the model configuration section you can specify which framework you want to use. You can run our deep learning models in both pytorch and tensorflow. Just specify it in config.py and make sure you have set up the environment as explained above; everything specific to the framework will be handled in the background.

config.py also allows you to configure hyperparameters such as the learning rate, and to enable early stopping of models.
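
For illustration, a full training run of the LR task with the deep learning models could be configured roughly as follows. This is a sketch based on the settings described above; the 'framework' key name is an assumption, and the exact keys (including those for the learning rate and early stopping) are defined in config.py:

config['task'] = 'LR_task'            # benchmark task to run
config['dataset'] = 'antisaccade'     # paradigm/dataset used for the task
config['preprocessing'] = 'min'       # preprocessing variant
config['feature_extraction'] = False  # set to True when running the standard ML models on Hilbert-transformed data
config['include_ML_models'] = False
config['include_DL_models'] = True
config['include_your_models'] = False
config['include_dummy_models'] = True
config['retrain'] = True              # train the specified models
config['save_models'] = True          # save checkpoints of the trained models
# config['load_experiment_dir'] = path/to/your/model   # only needed when retrain is False
config['framework'] = 'pytorch'       # assumed key name; 'pytorch' or 'tensorflow'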

hyperparameters.py

Here we define our models. Standard ML models and deep learning models are configured in a dictionary that contains the model object and the hyperparameters that are passed when the object is instantiated.

You can add your own models in the your_models dictionary. Specify the models for each task separately. Make sure to enable all the models that you want to run in config.py.
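
As a sketch, an entry in the your_models dictionary could look roughly like the following. MyModel and the exact nesting of the dictionary (per task, dataset and preprocessing variant) are illustrative; see the existing entries in hyperparameters.py for the precise structure:

your_models = {
    'LR_task': {   # configure models for each task separately
        # model class plus the keyword arguments passed when it is instantiated
        'MyModel': [MyModel, {'learning_rate': 1e-4, 'epochs': 50}],
    },
}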

Running the benchmark

Create a /runs directory to save files while running models on the benchmark.

benchmark.py

In benchmark.py we load all models specified in hyperparameters.py. Each model is fitted and then evaluated with the scoring function corresponding to the task being benchmarked.
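
Conceptually, the benchmark loop can be pictured as the following simplified sketch (variable and function names are illustrative, not the actual implementation):

for name, (ModelClass, params) in models.items():
    model = ModelClass(**params)                   # instantiate with the hyperparameters from hyperparameters.py
    model.fit(train_X, train_y)                    # train on the benchmark training split
    predictions = model.predict(test_X)            # predict on the held-out split
    score = scoring_function(test_y, predictions)  # task-specific metric
    print(name, score)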

main.py

To start the benchmark, run

python3 main.py

A directory for the current run is created, containing a training log that captures the console output, as well as model checkpoints of all runs.

Add Custom Models

To benchmark models, we use a common interface we call a trainer. A trainer is an object that implements the following methods:

fit() 
predict() 
save() 
load() 

Implementation of custom models

To implement your own custom model, create a class that implements the above methods. If you use library models, wrap them in a class that implements the above interface used in our benchmark.
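
As an illustration, a minimal wrapper around a scikit-learn regressor could look like the sketch below. The class name is hypothetical and the exact method signatures expected by the benchmark may differ slightly; see the existing trainer implementations in the repository:

import pickle
from sklearn.svm import SVR

class SVRTrainer:
    # Wraps a library model so that it exposes the trainer interface used by the benchmark.
    def __init__(self, **kwargs):
        self.model = SVR(**kwargs)

    def fit(self, X, y):
        self.model.fit(X, y)

    def predict(self, X):
        return self.model.predict(X)

    def save(self, path):
        with open(path, 'wb') as f:
            pickle.dump(self.model, f)

    def load(self, path):
        with open(path, 'rb') as f:
            self.model = pickle.load(f)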

Adding custom models to our benchmark pipeline

In hyperparameters.py add your custom models into the your_models dictionary. You can add objects that implement the above interface. Make sure to enable your custom models in config.py.
