Real-time analysis of intracranial neurophysiology recordings.

Overview

py_neuromodulation

The py_neuromodulation toolbox allows for real-time-capable processing of multimodal electrophysiological data. The primary use case is movement prediction for adaptive deep brain stimulation.

The documentation with example usage and parametrization can be found at https://neuromodulation.github.io/py_neuromodulation/.

Setup

To run this toolbox, first create a new conda environment:

conda env create --file=env.yml

The main modules enable real-time-capable feature preprocessing based on iEEG data in BIDS format.

Different features can be enabled, disabled, and parametrized in nm_settings.json (https://github.com/neuromodulation/py_neuromodulation/blob/main/pyneuromodulation/nm_settings.json).

The current implementation mainly focuses on band power and sharpwave feature estimation.
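A minimal sketch of adjusting the settings file before running feature estimation; the key names used here ("features", "bandpass_filter", "sharpwave_analysis") are assumptions, so check nm_settings.json in the repository for the actual structure:

    import json

    with open("pyneuromodulation/nm_settings.json", "r", encoding="utf-8") as f:
        settings = json.load(f)

    # Toggle the assumed feature switches for band power and sharpwave estimation.
    settings["features"]["bandpass_filter"] = True
    settings["features"]["sharpwave_analysis"] = True

    with open("pyneuromodulation/nm_settings.json", "w", encoding="utf-8") as f:
        json.dump(settings, f, indent=4)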

An example folder with a mock subject and a derived feature set is provided.

To run feature estimation on the example BIDS data, run the following in the root directory:

python main.py

This will write a feature_arr.csv file in the 'examples/data/derivatives' folder.
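As a quick check of the output, the feature file can be loaded with pandas; this is a minimal sketch, and the exact columns depend on the enabled features:

    import pandas as pd

    # Path as written by the example run above.
    features = pd.read_csv("examples/data/derivatives/feature_arr.csv")
    print(features.shape)
    print(list(features.columns)[:10])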

For further documentation, see ParametrizationDefinition for a description of the necessary parametrization files. FeatureEstimationDemo walks through an example feature estimation and explains sharpwave estimation.

Comments
  • Adopt the usage of an argument parser (argparse)

    argparse is a Python standard-library module for parsing command-line options, arguments, and sub-commands

    documentation: https://docs.python.org/3/library/argparse.html
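
    A minimal sketch of what an argparse-based entry point for main.py could look like; the argument names (--bids-path, --out-dir) are illustrative assumptions, not the project's actual interface:

        import argparse

        def parse_args():
            # Hypothetical command-line interface for the feature-estimation run.
            parser = argparse.ArgumentParser(
                description="Run py_neuromodulation feature estimation.")
            parser.add_argument("--bids-path", required=True,
                                help="Path to the BIDS root directory.")
            parser.add_argument("--out-dir", default="examples/data/derivatives",
                                help="Directory the features are written to.")
            return parser.parse_args()

        if __name__ == "__main__":
            args = parse_args()
            print(args.bids_path, args.out_dir)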

    enhancement 
    opened by mousa-saeed 7
  • Conda environment + setup

    @timonmerk I tried to install pyneuromodulation as a package, but it wasn't so easy to make it work. I did the following:

    • I modified env.yml (previously, setting up the environment didn't work; additionally, env.yml was configured to install all packages via pip, whereas now all packages except pybids are installed via conda).
    • I installed the conda environment via

    conda env create -f env.yml

    • I activated the conda environment via

    conda activate pyneuromodulation_test

    • I installed pyneuromodulation in editable mode

    pip install -r requirements.txt --editable .

    • In python, I ran

    import py_neuromodulation
    print(dir(py_neuromodulation))

    • Which told me that the only items in this package are

    ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']

    • Only once I had added the line: "from . import nm_BidsStream" to __init__.py, did I get the following output for dir():

    ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'nm_BidsStream', 'nm_IO', 'nm_bandpower', 'nm_coherence', 'nm_define_nmchannels', 'nm_eval_timing', 'nm_features', 'nm_fft', 'nm_filter', 'nm_generator', 'nm_hjorth_raw', 'nm_kalmanfilter', 'nm_normalization', 'nm_notch_filter', 'nm_plots', 'nm_projection', 'nm_rereference', 'nm_resample', 'nm_run_analysis', 'nm_sharpwaves', 'nm_stft', 'nm_stream', 'nm_test_settings']

    Questions:

    • Was this the intended way of setting up the environment and the package?
    • What could the problem with __init__.py be related to?
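
    One possible fix, sketched under the assumption that the package is meant to re-export its submodules from __init__.py (the module names are taken from the dir() output above):

        # pyneuromodulation/__init__.py -- minimal sketch that makes the submodules
        # importable as attributes of the package.
        from . import (
            nm_BidsStream,
            nm_IO,
            nm_bandpower,
            nm_features,
            nm_rereference,
            nm_run_analysis,
            nm_sharpwaves,
            nm_stream,
        )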
    opened by richardkoehler 4
  • Add Coherence and temporarily remove PDC and DTC

    @timonmerk We should maybe think about removing PDC and DTC from main again (and moving them to a branch) until we make sure that they are implemented correctly. Instead we could think about adding simple coherence, which might be less specific but easier to implement and less computationally expensive.
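
    A minimal sketch of what such a simple coherence feature could look like, based on scipy.signal.coherence between two channels; the band-averaging step and function name are assumptions for illustration:

        import numpy as np
        from scipy import signal

        def coherence_feature(x, y, sfreq, fband=(13, 35), nperseg=128):
            """Mean magnitude-squared coherence between two channels in a frequency band."""
            f, cxy = signal.coherence(x, y, fs=sfreq, nperseg=nperseg)
            mask = (f >= fband[0]) & (f <= fband[1])
            return float(np.mean(cxy[mask]))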

    enhancement 
    opened by richardkoehler 4
  • Specifying segment_lengths for frequency_ranges explicitly

    @timonmerk

    Currently the specification of segment_lengths is not very intuitive and is also done implicitly, e.g. 10 == 1/10 of the given segment. Maybe we could write the segment_lengths explicitly in milliseconds, so that they are independent of such parameters as the length of the data batch that is passed. So whether the data batch is 4096, 8000 or 700 samples long, the segment_length would always remain the same (e.g. 100 ms). Instead of this:

        "frequency_ranges": [[4, 8], [8, 12], [13, 20], [20, 35], [13, 35], [60, 80], [90, 200], [60, 200], [200, 300]],
        "segment_lengths": [1, 2, 2, 3, 3, 3, 10, 10, 10, 10],

    I would imagine something like this:

        "frequency_ranges": [[4, 8], [8, 12], [13, 20], [20, 35], [13, 35], [60, 80], [90, 200], [60, 200], [200, 300]],
        "segment_lengths": [1000, 500, 333, 333, 333, 100, 100, 100, 100],

    or even better, in my opinion (because it is less prone to errors, the user is forced to write the segment length right after the frequency range). Just to underline this argument, I noticed that there were too many segment_lengths in the settings.json file: there were 10 segment_lengths but only 9 frequency_ranges. So my suggestion:

        "frequency_ranges": [[[4, 8], 1000], [[8, 12], 500], [[13, 20], 333], [[20, 35], 333], [[13, 35], 333], [[60, 80], 100], [[90, 200], 100], [[60, 200], 100], [[200, 300], 100]],
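
    A small sketch of how the paired representation could be consumed, converting the segment length from milliseconds to samples; the variable names and sampling frequency are illustrative assumptions, not the toolbox's actual API:

        # Hypothetical parsing of the paired [frequency_range, segment_length_ms] format.
        frequency_ranges = [[[4, 8], 1000], [[8, 12], 500], [[13, 35], 333], [[60, 200], 100]]
        sfreq = 1000  # Hz, assumed sampling frequency

        for (f_low, f_high), segment_length_ms in frequency_ranges:
            n_samples = int(segment_length_ms / 1000 * sfreq)
            print(f"{f_low}-{f_high} Hz: use the last {n_samples} samples of the data batch")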

    opened by richardkoehler 4
  • Add cortical-subcortical feature estimation

    Given two channels, calculate:

    • partial directed coherence
    • phase amplitude coupling

    Add as a separate "feature" file. Needs to implement writing to the output dataframe.
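
    A minimal sketch of phase-amplitude coupling between a phase-providing and an amplitude-providing channel using the Hilbert transform (mean-vector-length modulation index); the band choices and function names are illustrative assumptions:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def bandpass(x, sfreq, low, high, order=4):
            # Zero-phase band-pass filter for a single channel.
            b, a = butter(order, [low, high], btype="bandpass", fs=sfreq)
            return filtfilt(b, a, x)

        def pac_mvl(phase_sig, amp_sig, sfreq, phase_band=(13, 35), amp_band=(60, 200)):
            """Mean-vector-length PAC between two channels."""
            phase = np.angle(hilbert(bandpass(phase_sig, sfreq, *phase_band)))
            amp = np.abs(hilbert(bandpass(amp_sig, sfreq, *amp_band)))
            return float(np.abs(np.mean(amp * np.exp(1j * phase))))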

    opened by timonmerk 4
  • Optimize speed of feature normalization

    • Computation time of feature normalization was improved
    • Memory usage of feature normalization was improved
    • nm_eval_timing was repaired, as these changes broke the API

    closes #136
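
    A sketch of the kind of vectorized normalization this PR targets: z-scoring the newest feature vector against a buffer of past feature samples with NumPy only; the buffer handling is an assumption, not the toolbox's exact behaviour:

        import numpy as np

        def normalize_features(current, previous, clip=3.0):
            """Z-score the current feature vector against past samples.

            current: 1D array (n_features,) for the newest batch.
            previous: 2D array (n_past_samples, n_features) of earlier feature vectors.
            """
            mean = previous.mean(axis=0)
            std = previous.std(axis=0)
            std[std == 0] = 1.0  # avoid division by zero for constant features
            return np.clip((current - mean) / std, -clip, clip)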

    opened by richardkoehler 3
  • Rework "nm_settings.json"

    There are some minor issues in the nm_settings.json that we could improve:

    1. Add settings for STFT. Right now STFT requires the sampling frequency to be 1000 Hz
    2. Re-think frequency bands: Either we define them separate from FFT, STFT or bandpass, and then they can't be specifically adapted to each method. Or you can define them inside of each method (FFT, STFT, bandpass separately). But right now, FFT and STFT are "taking" the info from the bandpass settings. Also, right now the fband_names are hard-coded in nm_features.py (not sure why this is the case?)
    3. Make notch filter settings more flexible (i.e. let the user define how many harmonics he would like to remove).
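
    For point 3, a sketch of a notch filter with a configurable number of harmonics, based on mne.filter.notch_filter; the wrapper and its parameter names are illustrative:

        import numpy as np
        from mne.filter import notch_filter

        def apply_notch(data, sfreq, line_noise=50, n_harmonics=3):
            """Notch-filter the line frequency and a configurable number of harmonics.

            data: array of shape (n_channels, n_times), dtype float.
            """
            # Fundamental line frequency plus n_harmonics harmonics.
            freqs = np.arange(1, n_harmonics + 2) * line_noise
            return notch_filter(data, sfreq, freqs, verbose=False)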
    enhancement 
    opened by richardkoehler 3
  • Add possibility to "create" new channels

    Add the option in "settings.json" to "create" new channels by summation (e.g. new_channel: LFP_R_234, sum_channels: [LFP_R_2, LFP_R_3, LFP_R_4]). Not an imminent priority, but would be nice to have.
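
    A sketch of how such a summed channel could be appended to the data array; the channel names follow the example above and the helper function is hypothetical:

        import numpy as np

        def add_summed_channel(data, ch_names, new_name, sum_channels):
            """Append a new channel that is the sum of existing channels.

            data: array of shape (n_channels, n_times).
            """
            idx = [ch_names.index(ch) for ch in sum_channels]
            new_row = data[idx, :].sum(axis=0, keepdims=True)
            return np.vstack([data, new_row]), ch_names + [new_name]

        # e.g. data, ch_names = add_summed_channel(data, ch_names, "LFP_R_234",
        #                                          ["LFP_R_2", "LFP_R_3", "LFP_R_4"])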

    opened by richardkoehler 3
  • Add column "status" to M1.tsv

    @timonmerk I suggest that we add the column "status" (like in BIDS channels.tsv files) to the M1.tsv file; values would be "good" or "bad". I recently encountered the problem that I wanted to exclude an ECOG channel from the ECOG average reference because it was too noisy, but there is no way to do this except to hard-code it. Alternatively, we could in theory also implement excluding channels by setting the type of the bad channel to "bad". This is not 100% logical, but it would avoid having to change the current API. However, the more sustainable and "BIDS-friendly" solution would be to implement the "status" column. I could easily imagine such a scenario:

    • We want to make real-time predictions from an ECOG grid (e.g. 6 * 5 electrodes), but to reduce computational cost we only want to select the best performing ECOG channel. However, we would like to re-reference this ECOG channel to an average ECOG reference. So the solution would be to mark all ECOG channels as type "ecog", mark the bad ECOG channels as "bad", the others as "good", and only set "used" to 1 for the best performing ECOG channel.
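
    A sketch of how a "status" column in M1.tsv could be used when building the re-referencing index sets; the "status" column and its "good"/"bad" values follow the proposal above, while the other columns (name, type, used) are taken from the existing M1.tsv description:

        import pandas as pd

        df_M1 = pd.read_csv("M1.tsv", sep="\t")

        # Channels entering the ECOG common average reference: good ECOG channels only.
        ecog_ref_idx = df_M1.index[(df_M1["type"] == "ecog")
                                   & (df_M1["status"] == "good")].to_numpy()

        # Channels for which features are actually computed.
        used_idx = df_M1.index[df_M1["used"] == 1].to_numpy()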
    opened by richardkoehler 3
  • Check in settings.py sampling frequency filter relation

    For sampling frequencies that are too low, the current notch filter will fail. Design the filter in settings.py?

    settings.py should be written as a class, such that loading settings.json, df_M1 and similar code is no longer part of start_BIDS / start_LSL.

        # Assumed imports; raw_arr (an MNE Raw object), PATH_M1, define_M1 and
        # rereference are defined elsewhere in the calling script.
        import os

        import numpy as np
        import pandas as pd

        fs = int(np.ceil(raw_arr.info["sfreq"]))
        line_noise = int(raw_arr.info["line_freq"])

        # read df_M1 / create M1 if None specified
        df_M1 = pd.read_csv(PATH_M1, sep="\t") \
            if PATH_M1 is not None and os.path.isfile(PATH_M1) \
            else define_M1.set_M1(raw_arr.ch_names, raw_arr.get_channel_types())

        # Channel bookkeeping used to set up the real-time re-referencing.
        ch_names = list(df_M1['name'])
        refs = df_M1['rereference']
        to_ref_idx = np.array(df_M1[(df_M1['used'] == 1)].index)
        cortex_idx = np.where(df_M1.type == 'ecog')[0]
        subcortex_idx = np.array(df_M1[(df_M1["type"] == 'seeg') | (df_M1['type'] == 'dbs')
                                       | (df_M1['type'] == 'lfp')].index)
        ref_here = rereference.RT_rereference(ch_names, refs, to_ref_idx,
                                              cortex_idx, subcortex_idx,
                                              split_data=False)
    
    opened by timonmerk 3
  • Update rereference and test_features

    @timonmerk So I was pulling all the information extraction from df_M1, like cortex_idx and subcortex_idx etc., into the rereference module, to make the actual analysis script (e.g. test_features) a bit cleaner. This got me thinking that we should maybe initialize the settings as a class, into which we load the settings.json and M1.tsv. We could then get the relevant information via functions e.g. ch_names(), used_chs(), cortex_idx(), out_path() etc. What do you think? The pull request doesn't implement the class yet or anything, I only pulled the cortex_idx etc. into the rereference initialization.
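
    A rough sketch of such a settings class, under the assumption that it wraps settings.json and M1.tsv and exposes the derived quantities as properties; the names follow the suggestion above and are not an existing API:

        import json

        import pandas as pd

        class NMSettings:
            """Hypothetical wrapper around settings.json and M1.tsv."""

            def __init__(self, settings_path, m1_path):
                with open(settings_path, "r", encoding="utf-8") as f:
                    self.settings = json.load(f)
                self.df_M1 = pd.read_csv(m1_path, sep="\t")

            @property
            def ch_names(self):
                return list(self.df_M1["name"])

            @property
            def used_chs(self):
                return list(self.df_M1.loc[self.df_M1["used"] == 1, "name"])

            @property
            def cortex_idx(self):
                return self.df_M1.index[self.df_M1["type"] == "ecog"].to_numpy()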

    opened by richardkoehler 3
  • Restructure nm_run_analysis.py

    This PR is aimed at restructuring run_analysis.py so that the data processing can now be performed independently of a py_neuromodulation data stream. This means in detail that:

    • I renamed the class Run to DataProcessor to make it clearer what it does: it processes a single batch of data. This is only my personal preference, so feel free to revert this change or give it any other name.
    • It is now DataProcessor and not the Stream that instantiates the Preprocessor and the Features classes from the given settings. This way DataProcessor can be used independently of Stream (e.g. in the case of timeflux, where timeflux handles the stream, hence the name of this branch).
    • I noticed a major bug in nm_run_analysis.py, where the settings specified "raw_resampling", but nm_run_analysis checked for "raw_resample" and this went unnoticed, because the matching statement didn't check for invalid specifications. This means that in example_BIDS.py "raw_resampling" was activated, but the data was not actually resampled. It took me a couple of hours to find this bug. I fixed the bug and also added handling the case of invalid strings, but the decoding performance has now slightly changed. Note that in this example case if one deactivates "raw_resampling", one might potentially observe improved performance.
    • After this restructuring, it is no longer necessary to specify the "preprocessing_order" and to set the preprocessing methods to True or False respectively. The keyword "preprocessing" now takes a list (which is equivalent to the preprocessing_order before) and infers from this list which preprocessing methods to use. If you want to deactivate a preprocessing method, simply take it out of the "preprocessing" list. This change I would consider optional, but it does make the code much easier to read and helps us to avoid errors in the settings specifications (see the sketch at the end of this comment).
    • I removed the use of the test_settings function, and I think that we should deprecate this function. It was a nice idea originally, but I am now more of the opinion that each method or class (e.g. NotchFilter) must do the work of checking whether the passed arguments are valid anyway. So having an additional test_settings function would mean duplicating this check, and it means that each time we change a function, we have to change test_settings too - so this means duplication of work for ourselves, too.
    • Fixed typos, like LineLenth, and added and fixed some type hints.
    • I might have forgotten about some additional changes I made ...

    Let me know what you think!
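
    A minimal sketch of the list-based preprocessing dispatch described in the bullets above; the step names and the placeholder callables are illustrative, while the real preprocessing classes live in the toolbox:

        # Hypothetical dispatch table mapping step names to preprocessing callables.
        PREPROCESSORS = {
            "raw_resampling": lambda data, settings: data,  # placeholders only
            "notch_filter": lambda data, settings: data,
            "re_referencing": lambda data, settings: data,
        }

        def preprocess(data, settings):
            """Apply the steps listed in settings["preprocessing"], in order."""
            for step in settings["preprocessing"]:
                if step not in PREPROCESSORS:
                    raise ValueError(f"Invalid preprocessing step: {step}")
                data = PREPROCESSORS[step](data, settings)
            return data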

    opened by richardkoehler 0
Releases: v0.02
Owner: Interventional and Cognitive Neuromodulation Group - Neumann Lab Berlin