Implementation of ToeplitzLDA for spatiotemporal stationary time series data.

Overview

ToeplitzLDA

Code for the ToeplitzLDA classifier proposed here. The classifier conforms to the sklearn interface and can be used as a drop-in replacement for other LDA classifiers. For in-depth usage, refer to the learning from label proportions (LLP) example or the example script.

Note that we used Ubuntu 20.04 with Python 3.8.10 to generate our results.

Getting Started / User Setup

If you only want to use this library, you can use the following setup. Note that this setup is based on a fresh Ubuntu 20.04 installation.

Getting a fresh Ubuntu ready

apt install python3-pip python3-venv

Python package installation

In this setup, we assume you want to run the examples that actually make use of real EEG data or the actual unsupervised speller replay. If you only want to employ ToeplitzLDA on your own spatiotemporal data, i.e., without mne and moabb, you can omit the neuro package extra: pip install toeplitzlda or pip install toeplitzlda[solver]

  1. (Optional) Install a Fortran compiler. On Ubuntu: apt install gfortran
  2. Create virtual environment: python3 -m venv toeplitzlda_venv
  3. Activate virtual environment: source toeplitzlda_venv/bin/activate
  4. Install toeplitzlda: pip install toeplitzlda[neuro,solver]. If you don't have a Fortran compiler: pip install toeplitzlda[neuro]

Check if everything works

Either clone this repo or just download the scripts/example_toeplitz_lda_bci_data.py file and run it: python example_toeplitz_lda_bci_data.py. Note that this will automatically download EEG data with a size of around 650MB.

Alternatively, you can use scripts/example_toeplitz_lda_generated_data.py, where artificial data is generated. Note, however, that only stationary background noise is modeled, without the interfering artifacts present in, e.g., real EEG data. As a result, the overfitting of traditional shrinkage LDA (sLDA) to these artifacts is less pronounced.

Using ToeplitzLDA in place of traditional shrinkage LDA from sklearn

If you already have your own pipeline, you can simply add toeplitzlda as a dependency in your project and replace sklearn's LDA, i.e., instead of:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # or eigen solver

use

from toeplitzlda.classification import ToeplitzLDA
clf = ToeplitzLDA(n_channels=your_n_channels)

where your_n_channels is the number of channels of your signal and needs to be provided for this method to work.
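For a quick smoke test, here is a minimal sketch that fits the classifier on synthetic, channel-prime feature vectors. The shapes, labels, and random data are illustrative assumptions, not part of the API:

import numpy as np
from toeplitzlda.classification import ToeplitzLDA

n_epochs, n_channels, n_times = 200, 16, 30
rng = np.random.default_rng(0)
# Flattened epochs: all channels enumerated per time point (channel-prime order).
X = rng.standard_normal((n_epochs, n_channels * n_times))
y = rng.integers(0, 2, size=n_epochs)  # two illustrative classes

clf = ToeplitzLDA(n_channels=n_channels)
clf.fit(X, y)
print(clf.predict(X[:5]))

Since the classifier conforms to the sklearn interface, it also works inside sklearn pipelines and cross-validation utilities.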

If you prefer using sklearn, you can replace only the covariance estimation part. Note, however, that in practice (on our data) this yields worse performance: sklearn estimates the class-wise covariance matrices and averages them afterwards, whereas we remove the class-wise means and then estimate one covariance matrix from the pooled data.

So instead of:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # or eigen solver

you would use

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from toeplitzlda.classification.covariance import ToepTapLW
toep_cov = ToepTapLW(n_channels=your_n_channels)
clf = LinearDiscriminantAnalysis(solver="lsqr", covariance_estimator=toep_cov)  # or eigen solver
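To make the pooling difference concrete, here is a plain-numpy sketch of the estimate described above (illustrative only, not the library's implementation): class-wise means are removed first, and a single covariance matrix is then estimated from the pooled data.

import numpy as np

def pooled_class_centered_cov(X, y):
    # Remove each class's mean, then estimate one covariance from the pooled data.
    Xc = X.astype(float).copy()
    for c in np.unique(y):
        Xc[y == c] -= Xc[y == c].mean(axis=0)
    return Xc.T @ Xc / (Xc.shape[0] - 1)

sklearn's class-wise approach would instead compute one covariance matrix per class and average them afterwards.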

Development Setup

We use a Fortran compiler to provide speedups for solving block-Toeplitz linear equation systems. If you are on Ubuntu, you can install gfortran.

We use poetry for dependency management. If you have it installed, you can simply run poetry install to set up the virtual environment with all dependencies. All extra features can be installed with poetry install -E solver,neuro.

If the setup does not work for you, please open an issue. We cannot guarantee support for many different platforms, but could provide a Singularity image.

Learning from label proportions

Use the run_llp.py script to apply ToeplitzLDA in the LLP scenario and to create the results file for the different preprocessing parameters. These can then be visualized with visualize_llp.py to create the plots shown in our publication. Note that running LLP takes a while; the two datasets are downloaded automatically and are approximately 16GB in size. Alternatively, you can use the results we provide in scripts/usup_replay/provided_results by moving/copying them to the location that visualize_llp.py expects.

ERP benchmark

This is not yet available.

Note that this benchmark will take quite a long time if you do not have access to a computing cluster. The public datasets (including the LLP datasets) total approximately 120GB in size.

BLOCKING TODO: How should we handle the private datasets?

FAQ

Why is my classification performance for my stationary spatiotemporal data really bad?

Check whether your data is in channel-prime order, i.e., in the flattened feature vector you first enumerate over all channels (or other spatially distributed sensors) for the first time point, then for the second time point, and so on. If this is not the case, tell the classifier, e.g., ToeplitzLDA(n_channels=16, data_is_channel_prime=False). The sketch below illustrates the two orderings.
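A plain-numpy sketch of the two orderings (array names and shapes are illustrative):

import numpy as np

n_channels, n_times = 16, 30
epoch = np.random.randn(n_channels, n_times)  # one epoch: channels x time points

# Channel-prime: [ch0_t0, ch1_t0, ..., ch15_t0, ch0_t1, ...] -- the default expectation.
x_channel_prime = epoch.T.reshape(-1)
# Time-prime: [ch0_t0, ch0_t1, ..., ch0_t29, ch1_t0, ...] -- set data_is_channel_prime=False.
x_time_prime = epoch.reshape(-1)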
