Brainda

BCI datasets and algorithms
Welcome!

First and foremost, Welcome!

Thank you for visiting the Brainda repository, which was initially released at this repo and reorganized here. This project provides datasets and decoding algorithms for BCI research in Python, as part of the MetaBCI project, which aims to provide a Python platform for BCI users to design paradigms, collect data, process signals, present feedback, and drive robots.

This document is a hub to give you some information about the project. Jump straight to one of the sections below, or just scroll down to find out more.

What are we doing?

The problem

  • BCI datasets come in different formats and standards
  • It's tedious to figure out the details of the data
  • Lack of python implementations of modern decoding algorithms

For someone new to BCI who wants to do interesting research, most of their time will be spent preprocessing data or reproducing algorithms from papers.

The solution

Brainda will:

  • Allow users to load the data easily without knowing the details
  • Provide flexible hook functions to control the preprocessing flow
  • Provide the latest decoding algorithms

The goal of Brainda is to let researchers focus on improving their own BCI algorithms without spending too much time on preliminary preparations.

Features

  • Improvements to MOABB APIs

    • add hook functions to control the preprocessing flow more easily
    • use joblib to accelerate data loading
    • add proxy options for network connection issues
    • add more information to the data's meta
    • other small changes
  • Supported Datasets

    • MI Datasets
      • AlexMI
      • BNCI2014001, BNCI2014004
      • PhysionetMI, PhysionetME
      • Cho2017
      • MunichMI
      • Schirrmeister2017
      • Weibo2014
      • Zhou2016
    • SSVEP Datasets
      • Nakanishi2015
      • Wang2016
      • BETA
  • Implemented BCI algorithms

    • Decomposition Methods
      • SPoC, CSP, MultiCSP and FBCSP
      • CCA, itCCA, MsCCA, ExtendCCA, ttCCA, MsetCCA, MsetCCA-R, TRCA, TRCA-R, SSCOR and TDCA
      • DSP
    • Manifold Learning
      • Basic Riemannian Geometry operations
      • Alignment methods
      • Riemannian Procrustes Analysis
    • Deep Learning
      • ShallowConvNet
      • EEGNet
      • ConvCA
      • GuneyNet
    • Transfer Learning
      • MEKT
      • LST

Installation

  1. Clone the repo
    git clone https://github.com/TBC-TJU/brainda.git
  2. Change to the project directory
    cd brainda
  3. Install all requirements
    pip install -r requirements.txt 
  4. Install the brainda package in editable mode
    pip install -e .
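
After installing, you can quickly check that the package is importable (a minimal sketch; it only assumes the editable install above succeeded):

# sanity check: these modules are used throughout the examples below
import brainda
import brainda.datasets
import brainda.paradigms

print("brainda imported successfully")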

Usage

Data Loading

In the basic case, we can load data with the options recommended by the dataset maker.

from brainda.datasets import AlexMI
from brainda.paradigms import MotorImagery

dataset = AlexMI() # declare the dataset
paradigm = MotorImagery(
    channels=None, 
    events=None,
    intervals=None,
    srate=None
) # declare the paradigm, use recommended options

print(dataset) # see basic dataset information

# X, y are numpy arrays and meta is a pandas DataFrame
X, y, meta = paradigm.get_data(
    dataset, 
    subjects=dataset.subjects, 
    return_concat=True, 
    n_jobs=None, 
    verbose=False)
print(X.shape)
print(meta)

If you don't have the dataset yet, the program will automatically download a local copy, generally into your ~/mne_data folder. However, you can always download the dataset in advance and store it in a folder of your choice.

dataset.download_all(
    path='/your/datastore/folder', # save folder
    force_update=False, # re-download even if the data exist
    proxies=None, # add a proxy if you need one, the same as the requests package
    verbose=None
)

# If you encounter network connection issues, try this
# dataset.download_all(
#     path='/your/datastore/folder', # save folder
#     force_update=False, # re-download even if the data exist
#     proxies={
#         'http': 'socks5://user:[email protected]:port',
#         'https': 'socks5://user:[email protected]:port'
#     },
#     verbose=None
# )

You can also choose channels, events, intervals, srate, and subjects yourself.

paradigm = MotorImagery(
    channels=['C3', 'CZ', 'C4'], 
    events=['right_hand', 'feet'],
    intervals=[(0, 2)], # 2 seconds
    srate=128
)

X, y, meta = paradigm.get_data(
    dataset, 
    subjects=[2, 4], 
    return_concat=True, 
    n_jobs=None, 
    verbose=False)
print(X.shape)
print(meta)

or use different intervals for different events. In this case, X, y, and meta are returned as dicts keyed by event name.

dataset = AlexMI()
paradigm = MotorImagery(
    channels=['C3', 'CZ', 'C4'], 
    events=['right_hand', 'feet'],
    intervals=[(0, 2), (0, 1)], # 2s for right_hand, 1s for feet
    srate=128
)

X, y, meta = paradigm.get_data(
    dataset, 
    subjects=[2, 4], 
    return_concat=False, 
    n_jobs=None, 
    verbose=False)
print(X['right_hand'].shape, X['feet'].shape)
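
Since return_concat is False here, each event can be handled separately. A minimal sketch of iterating over the per-event results (plain dict access; the keys follow the event names passed to the paradigm):

# X and y are dicts keyed by event name when return_concat=False
for event in ['right_hand', 'feet']:
    print(event, X[event].shape, len(y[event]))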

Preprocessing

The paradigm.get_data function processes data in three stages: the raw recordings, the extracted epochs, and the final arrays.

brainda provides 3 hooks that let you control the preprocessing flow in paradigm.get_data. With these hooks, you can operate on the data just like a typical MNE flow:

dataset = AlexMI()
paradigm = MotorImagery()

# add 6-30Hz bandpass filter in raw hook
def raw_hook(raw, caches):
    # do something with raw object
    raw.filter(6, 30, 
        l_trans_bandwidth=2, 
        h_trans_bandwidth=5, 
        phase='zero-double')
    caches['raw_stage'] = caches.get('raw_stage', -1) + 1
    return raw, caches

def epochs_hook(epochs, caches):
    # do something with epochs object
    print(epochs.event_id)
    caches['epoch_stage'] = caches.get('epoch_stage', -1) + 1
    return epochs, caches

def data_hook(X, y, meta, caches):
    # retrieve caches from the previous stages
    print("Raw stage: {}, Epochs stage: {}".format(caches['raw_stage'], caches['epoch_stage']))
    # do something with X, y, and meta
    caches['data_stage'] = caches.get('data_stage', -1) + 1
    return X, y, meta, caches

paradigm.register_raw_hook(raw_hook)
paradigm.register_epochs_hook(epochs_hook)
paradigm.register_data_hook(data_hook)

X, y, meta = paradigm.get_data(
    dataset, 
    subjects=[1], 
    return_concat=True, 
    n_jobs=None, 
    verbose=False)

If the dataset maker provides these hooks with the dataset, brainda will call them implicitly. But you can always replace them with your own hooks, as shown above.

Machine Learning Pipeline

Now it's time to run a real BCI algorithm. Here is a demo of CSP for 2-class MI:

import numpy as np

from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

from brainda.datasets import AlexMI
from brainda.paradigms import MotorImagery
from brainda.algorithms.utils.model_selection import (
    set_random_seeds,
    generate_kfold_indices, match_kfold_indices)
from brainda.algorithms.decomposition import CSP

dataset = AlexMI()
paradigm = MotorImagery(events=['right_hand', 'feet'])

# add 6-30Hz bandpass filter in raw hook
def raw_hook(raw, caches):
    # do something with raw object
    raw.filter(6, 30, l_trans_bandwidth=2, h_trans_bandwidth=5, phase='zero-double', verbose=False)
    return raw, caches

paradigm.register_raw_hook(raw_hook)

X, y, meta = paradigm.get_data(
    dataset, 
    subjects=[3], 
    return_concat=True, 
    n_jobs=None, 
    verbose=False)

# 5-fold cross validation
set_random_seeds(38)
kfold = 5
indices = generate_kfold_indices(meta, kfold=kfold)

# CSP with SVC classifier
estimator = make_pipeline(*[
    CSP(n_components=4),
    SVC()
])

accs = []
for k in range(kfold):
    train_ind, validate_ind, test_ind = match_kfold_indices(k, meta, indices)
    # merge train and validate set
    train_ind = np.concatenate((train_ind, validate_ind))
    p_labels = estimator.fit(X[train_ind], y[train_ind]).predict(X[test_ind])
    accs.append(np.mean(p_labels==y[test_ind]))
print(np.mean(accs))

If everything is fine, you will get an accuracy of about 0.75.
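
Beyond the mean, it can be useful to report the spread across folds as well (plain numpy, continuing the demo above and independent of the brainda API):

# summarize the per-fold accuracies collected in the loop above
accs = np.asarray(accs)
print("acc: {:.3f} +/- {:.3f}".format(accs.mean(), accs.std()))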

Who are we?

The MetaBCI project is carried out by researchers from

  • Academy of Medical Engineering and Translational Medicine, Tianjin University, China
  • Tianjin Brain Center, China

Dr. Lichao Xu is the main contributor to the Brainda repository.

What do we need?

You! In whatever way you can help.

We need expertise in programming, user experience, software sustainability, documentation, technical writing, and project management.

We'd love your feedback along the way.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. Contributions of BCI algorithms are especially welcome.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Email: [email protected]
