Real-time analysis of intracranial neurophysiology recordings.

Overview

py_neuromodulation

The py_neuromodulation toolbox allows for real-time-capable processing of multimodal electrophysiological data. Its primary use case is movement prediction for adaptive deep brain stimulation.

The documentation, including example usage and parametrization, is available at https://neuromodulation.github.io/py_neuromodulation/.

Setup

To run this toolbox, first create a new conda environment:

conda env create --file=env.yml

The main modules enable running real-time-capable feature estimation and preprocessing based on iEEG data in BIDS format.

Different features can be enabled, disabled, and parametrized in nm_settings.json (https://github.com/neuromodulation/py_neuromodulation/blob/main/pyneuromodulation/nm_settings.json).
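
For illustration, a feature could be toggled programmatically before starting a stream. This is a hedged sketch: the key names "features", "bandpass_filter", and "sharpwave_analysis" are assumptions and may not match the shipped settings file exactly.

    # Hypothetical sketch -- the key names below are assumptions and may
    # differ from the shipped nm_settings.json.
    import json

    with open("pyneuromodulation/nm_settings.json") as f:
        settings = json.load(f)

    settings["features"]["bandpass_filter"] = True     # enable band power features
    settings["features"]["sharpwave_analysis"] = True  # enable sharpwave features
    settings["features"]["fft"] = False                # disable FFT features

    with open("pyneuromodulation/nm_settings.json", "w") as f:
        json.dump(settings, f, indent=4)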

The current implementation mainly focuses on band power and sharpwave feature estimation.

An example folder with a mock subject and a derivative feature set is provided.

To run feature estimation on the example BIDS data, run the following in the root directory:

python main.py

This will write a feature_arr.csv file in the 'examples/data/derivatives' folder.
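
The written features can then be inspected with pandas, for example (a minimal sketch; the exact column layout of feature_arr.csv is not documented here):

    # Minimal sketch for inspecting the written feature file.
    import pandas as pd

    features = pd.read_csv("examples/data/derivatives/feature_arr.csv")
    print(features.shape)         # (number of samples, number of features)
    print(features.columns[:10])  # first few feature names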

For further documentation, see ParametrizationDefinition for a description of the necessary parametrization files. FeatureEstimationDemo walks through an example feature estimation and explains sharpwave estimation.

Comments
  • Adopt the usage of an argument parser (argparse)

    argparse is a Python standard-library module for parsing command-line options, arguments, and sub-commands

    documentation: https://docs.python.org/3/library/argparse.html
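
    A minimal sketch of what this could look like for main.py; the argument names below are suggestions, not an existing interface:

        # Hypothetical argparse interface for main.py; argument names are
        # suggestions, not part of the current code base.
        import argparse

        parser = argparse.ArgumentParser(
            description="Run feature estimation on iEEG BIDS data.")
        parser.add_argument("--bids-path", default="examples/data",
                            help="path to the BIDS root directory")
        parser.add_argument("--settings", default="pyneuromodulation/nm_settings.json",
                            help="path to the settings JSON file")
        args = parser.parse_args()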

    enhancement 
    opened by mousa-saeed 7
  • Conda environment + setup

    @timonmerk I tried to install pyneuromodulation as a package, but it wasn't so easy to make it work. I did the following:

    • I modified env.yml (previously, setting up the environment didn't work, and env.yml was configured to install all packages via pip; now all packages except pybids are installed via conda).
    • I installed the conda environment via

    conda env create -f env.yml

    • I activated the conda environment via

    conda activate pyneuromodulation_test

    • I installed pyneuromodulation in editable mode

    pip install -r requirements.txt --editable .

    • In Python, I ran:

    import py_neuromodulation
    print(dir(py_neuromodulation))

    • Which told me that the only items in this package are

    ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']

    • Only once I had added the line "from . import nm_BidsStream" to __init__.py did I get the following output for dir():

    ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'nm_BidsStream', 'nm_IO', 'nm_bandpower', 'nm_coherence', 'nm_define_nmchannels', 'nm_eval_timing', 'nm_features', 'nm_fft', 'nm_filter', 'nm_generator', 'nm_hjorth_raw', 'nm_kalmanfilter', 'nm_normalization', 'nm_notch_filter', 'nm_plots', 'nm_projection', 'nm_rereference', 'nm_resample', 'nm_run_analysis', 'nm_sharpwaves', 'nm_stft', 'nm_stream', 'nm_test_settings']

    Questions:

    • Was this the intended way of setting up the environment and the package?
    • What could the problem with __init__.py be related to?
    opened by richardkoehler 4
  • Add Coherence and temporarily remove PDC and DTC

    @timonmerk We should maybe think about removing PDC and DTC from main again (and moving them to a branch) until we make sure that they are implemented correctly. Instead we could think about adding simple coherence, which might be less specific but easier to implement and less computationally expensive.
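
    For reference, a simple magnitude-squared coherence between two channels can be computed with scipy; this is a sketch under assumed parameters, not the toolbox's implementation:

        # Sketch: magnitude-squared coherence between two channels with scipy.
        import numpy as np
        from scipy.signal import coherence

        fs = 1000                         # sampling frequency in Hz (assumed)
        rng = np.random.default_rng(0)
        x = rng.standard_normal(10 * fs)  # e.g. an ECOG channel
        y = rng.standard_normal(10 * fs)  # e.g. an LFP channel

        f, cxy = coherence(x, y, fs=fs, nperseg=fs)
        beta_coh = cxy[(f >= 13) & (f <= 35)].mean()  # mean coherence in the beta band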

    enhancement 
    opened by richardkoehler 4
  • Specifying segment_lengths for frequency_ranges explicitly

    @timonmerk

    Currently, the specification of segment_lengths is not very intuitive and is also done implicitly, e.g. 10 == 1/10 of the given segment. Maybe we could write the segment_lengths explicitly in milliseconds, so that they are independent of parameters such as the length of the data batch that is passed. So whether the data batch is 4096, 8000 or 700 samples long, the segment_length would always remain the same (e.g. 100 ms). Instead of this:

        "frequency_ranges": [[4, 8], [8, 12], [13, 20], [20, 35], [13, 35], [60, 80], [90, 200], [60, 200], [200, 300]],
        "segment_lengths": [1, 2, 2, 3, 3, 3, 10, 10, 10, 10]

    I would imagine something like this:

        "frequency_ranges": [[4, 8], [8, 12], [13, 20], [20, 35], [13, 35], [60, 80], [90, 200], [60, 200], [200, 300]],
        "segment_lengths": [1000, 500, 333, 333, 333, 100, 100, 100, 100]

    or, even better in my opinion, something like the following, because it is less prone to errors: the user is forced to write the segment length right after the frequency range. Just to underline this argument, I noticed that there were too many segment_lengths in the settings.json file: there were 10 segment_lengths but only 9 frequency_ranges. So my suggestion:

        "frequency_ranges": [[[4, 8], 1000], [[8, 12], 500], [[13, 20], 333], [[20, 35], 333], [[13, 35], 333], [[60, 80], 100], [[90, 200], 100], [[60, 200], 100], [[200, 300], 100]]
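
    Whatever format is chosen, a consistency check at settings load time would have caught the mismatch above. A sketch, using the key names quoted above:

        # Sketch of a consistency check at settings load time.
        def check_band_settings(settings):
            n_bands = len(settings["frequency_ranges"])
            n_segments = len(settings["segment_lengths"])
            if n_bands != n_segments:
                raise ValueError(
                    f"Got {n_bands} frequency_ranges "
                    f"but {n_segments} segment_lengths.")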

    opened by richardkoehler 4
  • Add cortical-subcortical feature estimation

    Given two channels, calculate:

    • partial directed coherence
    • phase amplitude coupling

    Add as separate "feature" file. Needs to implement write to output dataframe

    opened by timonmerk 4
  • Optimize speed of feature normalization

    • Computation time of feature normalization was improved
    • Memory usage of feature normalization was improved
    • nm_eval_timing was repaired, as these changes had broken the API

    closes #136

    opened by richardkoehler 3
  • Rework "nm_settings.json"

    There are some minor issues in the nm_settings.json that we could improve:

    1. Add settings for STFT. Right now, STFT requires the sampling frequency to be 1000 Hz.
    2. Re-think frequency bands: either we define them separately from FFT, STFT, or bandpass, in which case they can't be specifically adapted to each method, or we define them inside each method (FFT, STFT, bandpass) separately. But right now, FFT and STFT are "taking" the info from the bandpass settings. Also, the fband_names are currently hard-coded in nm_features.py (not sure why this is the case?).
    3. Make the notch filter settings more flexible (i.e. let the user define how many harmonics they would like to remove), e.g. as sketched below.
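
    A hedged sketch of what such a setting could look like (the key names are assumptions, not the current format):

        # Hypothetical settings excerpt; key names are assumptions.
        settings["notch_filter"] = {
            "line_noise_hz": 50,  # mains frequency
            "n_harmonics": 3,     # user-defined number of harmonics to remove
        }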
    enhancement 
    opened by richardkoehler 3
  • Add possibility to "create" new channels

    Add the option in "settings.json" to "create" new channels by summation (e.g. new_channel: LFP_R_234, sum_channels: [LFP_R_2, LFP_R_3, LFP_R_4]. Not an imminent priority, but would be nice to have.

    opened by richardkoehler 3
  • Add column "status" to M1.tsv

    @timonmerk I suggest that we add the column "status" (like in BIDS channels.tsv files) to the M1.tsv file; values would be "good" or "bad". I recently encountered the problem that I wanted to exclude an ECOG channel from the ECOG average reference because it was too noisy, but there is no way to do this except to hard-code it. Alternatively, we could in theory also implement excluding channels by setting the type of the bad channel to "bad". This is not 100% logical, but it would avoid changing the current API. However, the more sustainable and "BIDS-friendly" solution would be to implement the "status" column. I could easily imagine such a scenario:

    • We want to make real-time predictions from an ECOG grid (e.g. 6 * 5 electrodes), but to reduce computational cost we only want to select the best performing ECOG channel. However, we would like to re-reference this ECOG channel to an average ECOG reference. So the solution would be to mark all ECOG channels as type "ecog", mark the bad ECOG channels as "bad", the others as "good", and only set "used" to 1 for the best performing ECOG channel.
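
    A hypothetical M1.tsv excerpt with the proposed column could look like this (all columns other than "status" are illustrative, not the exact current M1.tsv layout):

        name        rereference   used   type   status
        ECOG_L_1    average       0      ecog   good
        ECOG_L_2    average       1      ecog   good
        ECOG_L_3    average       0      ecog   bad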
    opened by richardkoehler 3
  • Check in settings.py sampling frequency filter relation

    For sampling frequencies that are too low, the current notch filter will fail. Should the filter be designed in settings.py?

    settings.py should be written as a class, so that settings.json, df_M1, and related code are no longer part of start_BIDS / start_LSL.

        # Assumed imports from the surrounding script (module paths may differ):
        import os
        import numpy as np
        import pandas as pd
        import define_M1
        import rereference

        # Sampling and line-noise frequencies from the MNE raw object.
        fs = int(np.ceil(raw_arr.info["sfreq"]))
        line_noise = int(raw_arr.info["line_freq"])

        # Read df_M1, or create M1 from the raw channels if no path is specified.
        df_M1 = pd.read_csv(PATH_M1, sep="\t") \
            if PATH_M1 is not None and os.path.isfile(PATH_M1) \
            else define_M1.set_M1(raw_arr.ch_names, raw_arr.get_channel_types())

        # Channel selections derived from df_M1.
        ch_names = list(df_M1["name"])
        refs = df_M1["rereference"]
        to_ref_idx = np.array(df_M1[df_M1["used"] == 1].index)
        cortex_idx = np.where(df_M1.type == "ecog")[0]
        subcortex_idx = np.array(df_M1[(df_M1["type"] == "seeg")
                                       | (df_M1["type"] == "dbs")
                                       | (df_M1["type"] == "lfp")].index)

        # Real-time re-referencing object built from the channel definitions.
        ref_here = rereference.RT_rereference(ch_names, refs, to_ref_idx,
                                              cortex_idx, subcortex_idx,
                                              split_data=False)
    
    opened by timonmerk 3
  • Update rereference and test_features

    @timonmerk So I was pulling all the information extraction from df_M1 (like cortex_idx, subcortex_idx, etc.) into the rereference module, to make the actual analysis script (e.g. test_features) a bit cleaner. This got me thinking that we should maybe initialize the settings as a class into which we load settings.json and M1.tsv. We could then get the relevant information via functions, e.g. ch_names(), used_chs(), cortex_idx(), out_path(), etc. What do you think? The pull request doesn't implement the class yet; I only pulled the cortex_idx etc. into the rereference initialization.
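
    A minimal sketch of such a settings class (all names are assumptions based on this suggestion, not existing code):

        # Sketch of the proposed settings class; names are assumptions.
        import json
        import pandas as pd

        class Settings:
            def __init__(self, settings_path, m1_path):
                with open(settings_path) as f:
                    self.settings = json.load(f)
                self.df_M1 = pd.read_csv(m1_path, sep="\t")

            def ch_names(self):
                return list(self.df_M1["name"])

            def used_chs(self):
                return list(self.df_M1.loc[self.df_M1["used"] == 1, "name"])

            def cortex_idx(self):
                return self.df_M1.index[self.df_M1["type"] == "ecog"].to_numpy()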

    opened by richardkoehler 3
  • Restructure nm_run_analysis.py

    This PR is aimed at restructuring run_analysis.py so that the data processing can now be performed independently of a py_neuromodulation data stream. This means in detail that:

    • I renamed the class Run to DataProcessor to make it clearer what it does: it processes a single batch of data. This is only my personal preference, so feel free to revert this change or give it any other name.
    • It is now DataProcessor and not the Stream that instantiates the Preprocessor and the Features classes from the given settings. This way DataProcessor can be used independently of Stream (e.g. in the case of timeflux, where timeflux handles the stream, hence the name of this branch).
    • I noticed a major bug in nm_run_analysis.py: the settings specified "raw_resampling", but nm_run_analysis checked for "raw_resample", and this went unnoticed because the matching statement didn't check for invalid specifications. This means that in example_BIDS.py "raw_resampling" was activated, but the data was not actually resampled. It took me a couple of hours to find this bug. I fixed it and also added handling of invalid strings, but the decoding performance has now slightly changed. Note that in this example case, deactivating "raw_resampling" might potentially improve performance.
    • After this restructuring, it is no longer necessary to specify the "preprocessing_order" and to set the preprocessing methods to True or False respectively. The keyword "preprocessing" now takes a list (equivalent to the previous preprocessing_order) and infers from it which preprocessing methods to use. To deactivate a preprocessing method, simply take it out of the "preprocessing" list (see the sketch after this list). I would consider this change optional, but it makes the code much easier to read and helps us avoid errors in the settings specifications.
    • I removed the use of the test_settings function, and I think we should deprecate it. It was a nice idea originally, but I am now of the opinion that each method or class (e.g. NotchFilter) must check whether the passed arguments are valid anyway. Keeping an additional test_settings function would duplicate this check, and every time we change a function we would have to change test_settings too, which duplicates our own work as well.
    • Fixed typos, like LineLenth, and added and fixed some type hints.
    • I might have forgotten about some additional changes I made ...
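
    A hedged before/after sketch of the preprocessing specification described above (the exact key values are assumptions):

        # Before: boolean flags plus an explicit order (key values assumed).
        settings["preprocessing_order"] = ["raw_resampling", "notch_filter", "re_referencing"]
        settings["raw_resampling"] = True
        settings["notch_filter"] = True
        settings["re_referencing"] = True

        # After: a single list that both selects and orders the methods;
        # removing an entry deactivates that preprocessing step.
        settings["preprocessing"] = ["raw_resampling", "notch_filter", "re_referencing"]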

    Let me know what you think!

    opened by richardkoehler 0
Releases: v0.02

Owner: Interventional and Cognitive Neuromodulation Group (Neumann Lab Berlin)