Evaluating deep transfer learning for whole-brain cognitive decoding

Project description

This project provides two main packages (see src/) that allow you to apply DeepLight (see below) to the task-fMRI data of the Human Connectome Project (HCP):

  • deeplight is a simple python package that provides easy access to two pre-trained DeepLight architectures (2D-DeepLight and 3D-DeepLight; see below), designed for cognitive decoding of whole-brain fMRI data. Both architectures were pre-trained with the fMRI data of 400 individuals in six of the seven HCP experimental tasks (all tasks except for the working memory task, which we left out for testing purposes; click here for details on the HCP data).
  • hcprep is a simple python package that allows you to easily download the HCP task-fMRI data in a preprocessed format via the Amazon Web Services (AWS) S3 storage system and to transform these data into the tensorflow records data format.

Repository organization

├── poetry.lock         <- Details (exact versions) of installed dependencies
├── pyproject.toml      <- Overview of project dependencies
├── README.md           <- This README file
├── .gitignore          <- Specifies files that git should ignore
|
├── scripts/
|    ├── decode.py      <- An example of how to decode fMRI data with `deeplight`
|    ├── download.py    <- An example of how to download the preprocessed HCP fMRI data with `hcprep`
|    ├── interpret.py   <- An example of how to interpret fMRI data with `deeplight`
|    ├── preprocess.sh  <- An example of how to preprocess fMRI data with `hcprep`
|    └── train.py       <- An example of how to train `deeplight` with `hcprep` data
|
└── src/
|    ├── deeplight/
|    |    └──           <- `deeplight` package
|    ├── hcprep/
|    |    └──           <- `hcprep` package
|    ├── modules/
|    |    └──           <- `modules` package
|    └── setup.py       <- Makes `deeplight`, `hcprep`, and `modules` pip-installable (pip install -e .)

Installation

deeplight and hcprep are written for python 3.6 and require a working python environment running on your computer (we generally recommend pyenv for python version management).

First, clone and switch to this repository:

git clone https://github.com/athms/evaluating-deeplight-transfer.git
cd evaluating-deeplight-transfer

This project uses python poetry for dependency management. To install all required dependencies with poetry, run:

poetry install

To then install deeplight, hcprep, and modules in your poetry environment, run:

cd src/
poetry run pip3 install -e .

Packages

HCPrep

hcprep stores the HCP task-fMRI data locally in the Brain Imaging Data Structure (BIDS) format.

To make fMRI data usable for DL analyses with TensorFlow, hcprep can clean the downloaded fMRI data and store these in the TFRecords data format.

Getting data access: To download the HCP task-fMRI data, you will need AWS access to the HCP public data directory. Detailed instructions can be found here. Make sure to store the ACCESS_KEY and SECRET_KEY safely; they are required to access the data via the AWS S3 storage system.

AWS configuration: Set up your local AWS client (as described here) and add the following profile to '~/.aws/config':

[profile hcp]
region=eu-central-1

Choose the region based on your location.
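
One way to store your access keys is the AWS CLI (this writes them to ~/.aws/credentials for the hcp profile and prompts you for the ACCESS_KEY and SECRET_KEY):

aws configure --profile hcp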

TFR data storage: hcprep stores the preprocessed fMRI data locally in the TFRecords format, with one entry for each input fMRI volume of the data. Each entry contains the following features (a minimal parsing sketch follows the list below):

  • volume: the voxel activations of the volume, flattened over its X (91), Y (109), and Z (91) dimensions into a vector of length 91x109x91
  • task_id, subject_id, run_id: numerical id of task, subject, and run
  • tr: TR of the volume in the underlying experimental task
  • state: numerical label of the cognitive state associated with the volume within its task (e.g., [0,1,2,3] for the four cognitive states of the working memory task)
  • onehot: one-hot encoding of the state across all experimental tasks that are used for training (e.g., there are 20 cognitive states across the seven experimental tasks of the HCP; the four cognitive states of the working memory task could thus be mapped to the last four positions of the one-hot encoding, with indices [16: 0, 17: 1, 18: 2, 19: 3])
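
For illustration, here is a minimal sketch of how such a TFR-file could be parsed with tensorflow. The feature names follow the list above, but the exact dtypes, shapes, and file naming are assumptions rather than the package's actual specification:

import tensorflow as tf

# assumed feature specification; see `hcprep` for the actual one
N_VOXELS = 91 * 109 * 91  # flattened X * Y * Z
N_STATES = 20             # total cognitive states across all HCP tasks

feature_spec = {
    'volume': tf.io.FixedLenFeature([N_VOXELS], tf.float32),
    'task_id': tf.io.FixedLenFeature([], tf.int64),
    'subject_id': tf.io.FixedLenFeature([], tf.int64),
    'run_id': tf.io.FixedLenFeature([], tf.int64),
    'tr': tf.io.FixedLenFeature([], tf.int64),
    'state': tf.io.FixedLenFeature([], tf.int64),
    'onehot': tf.io.FixedLenFeature([N_STATES], tf.float32),
}

def parse_example(serialized):
    example = tf.io.parse_single_example(serialized, feature_spec)
    # restore the 3D shape of the flattened volume
    volume = tf.reshape(example['volume'], (91, 109, 91))
    return volume, example['onehot']

dataset = tf.data.TFRecordDataset(['<PATH TO TFR-FILE>'])
dataset = dataset.map(parse_example).batch(8)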

Note that hcprep also provides basic descriptive information about the HCP task-fMRI data in info.basics:

hcp_info = hcprep.info.basics()

basics contains the following information:

  • tasks: names of all HCP experimental tasks ('EMOTION', 'GAMBLING', 'LANGUAGE', 'MOTOR', 'RELATIONAL', 'SOCIAL', 'WM')
  • subjects: dictionary containing 1000 subject IDs for each task
  • runs: run IDs ('LR', 'RL')
  • t_r: repetition time of the fMRI data in seconds (0.72)
  • states_per_task: dictionary containing the label of each cognitive state of each task
  • onehot_idx_per_task: indices used to map the cognitive states of each task to the one-hot encoding of the TFR-files (see onehot above)
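
For example (a sketch; whether these fields are exposed as attributes, as shown here, is an assumption):

hcp_info = hcprep.info.basics()
print(hcp_info.tasks)                   # ['EMOTION', 'GAMBLING', ..., 'WM']
print(hcp_info.t_r)                     # 0.72
print(hcp_info.states_per_task['WM'])   # labels of the four WM states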

For further details on the experimental tasks and their cognitive states, click here.

DeepLight

deeplight implements two DeepLight architectures ("2D" and "3D"), which can be accessed as deeplight.two (2D) and deeplight.three (3D).

Importantly, both DeepLight architectures operate on the level of individual whole-brain fMRI volumes (i.e., individual TRs).

2D-DeepLight: A whole-brain fMRI volume is first sliced into a sequence of axial 2D-images (from bottom-to-top). These images are passed to a DL model, consisting of a 2D-convolutional feature extractor as well as an LSTM unit and output layer. First, the 2D-convolutional feature extractor reduces the dimensionality of the axial brain images through a sequence of 2D-convolution layers. The resulting sequence of higher-level slice representations is then fed to a bi-directional LSTM, modeling the spatial dependencies of brain activity within and across brain slices. Lastly, 2D-DeepLight outputs a decoding decision about the cognitive state underlying the fMRI volume, through a softmax output layer with one output unit per cognitive state in the data.
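
To make the data flow concrete, here is an illustrative sketch of this pipeline in the Keras API; all layer sizes and counts are assumptions for illustration and do not reproduce the pre-trained 2D-DeepLight:

import tensorflow as tf
from tensorflow.keras import layers

n_states = 20
# a whole-brain volume as a sequence of 91 axial slices of shape 91 x 109
inputs = tf.keras.Input(shape=(91, 91, 109, 1))
# 2D-convolutional feature extractor, applied to every axial slice
extractor = tf.keras.Sequential([
    layers.Conv2D(16, 3, strides=2, activation='relu'),
    layers.Conv2D(32, 3, strides=2, activation='relu'),
    layers.Conv2D(64, 3, strides=2, activation='relu'),
    layers.Flatten(),
])
x = layers.TimeDistributed(extractor)(inputs)
# bi-directional LSTM over the slice sequence (bottom-to-top)
x = layers.Bidirectional(layers.LSTM(64))(x)
# softmax output layer with one unit per cognitive state
outputs = layers.Dense(n_states, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)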

3D-DeepLight: The whole-brain fMRI volume is passed to a 3D-convolutional feature extractor, consisting of a sequence of twelve 3D-convolution layers. The 3D-convolutional feature extractor directly projects the fMRI volume into a higher-level, but lower-dimensional, representation of whole-brain activity, without the need for an LSTM. To make a decoding decision, 3D-DeepLight utilizes an output layer that is composed of a 1D-convolution and global average pooling layer as well as a softmax activation function. The 1D-convolution layer maps the higher-level representation of whole-brain activity of the 3D-convolutional feature extractor to one representation for each cognitive state in the data, while the global average pooling layer and softmax function then reduce these to a decoding decision.
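
Again, an illustrative Keras sketch; the filter counts, kernel sizes, and strides below are assumptions, not the trained 3D-DeepLight:

import tensorflow as tf
from tensorflow.keras import layers

n_states = 20
inputs = tf.keras.Input(shape=(91, 109, 91, 1))
x = inputs
# 3D-convolutional feature extractor: twelve 3D-convolution layers
for i in range(12):
    strides = 2 if i % 3 == 2 else 1  # downsample every third layer
    x = layers.Conv3D(8 * (1 + i // 3), 3, strides=strides,
                      padding='same', activation='relu')(x)
# 1D-convolution maps the representation to one feature map per state
x = layers.Reshape((-1, x.shape[-1]))(x)
x = layers.Conv1D(n_states, 1)(x)
# global average pooling and softmax reduce these to a decoding decision
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Softmax()(x)
model = tf.keras.Model(inputs, outputs)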

To interpret the decoding decisions of the two DeepLight architectures, relating their decoding decisions to the fMRI data, deeplight makes use of the layer-wise relevance propagation (LRP) technique. LRP decomposes individual decoding decisions of a DL model into the contributions of the individual input features (here, individual voxel activities) to these decisions.

Both deeplight architectures implement basic fit, decode, and interpret methods, alongside other functionality. For details on how to {train, decode, interpret} with deeplight, see scripts/.
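
A hypothetical sketch of this interface (the constructor and argument names here are assumptions; scripts/decode.py and scripts/interpret.py show the actual calls):

import deeplight

# hypothetical instantiation of the pre-trained 3D architecture
model = deeplight.three.model(pretrained=True)

# volumes: a batch of whole-brain fMRI volumes (e.g., parsed from TFR-files)
predictions = model.decode(volumes)      # decoded cognitive states
relevance_maps = model.interpret(volumes)  # LRP relevance per voxel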

For further methodological details regarding the two DeepLight architectures, see the upcoming preprint.

Note that we currently recommend running any application of interpret with 2D-DeepLight on CPU instead of GPU, due to its high memory demand (assuming that your available CPU memory is larger than your available GPU memory). You can make this switch by setting the following environment variable before running your script:

export CUDA_VISIBLE_DEVICES=""

We are currently working on reducing the overall memory demand of interpret with 2D-DeepLight and will push a code update soon.

Modules

modules is a fork of the modules module from interprettensor, which deeplight uses to build the 2D-DeepLight architecture. Note that modules is licensed differently from the other python packages in this repository (see modules/LICENSE).

Basic usage

You can find a set of example python scripts in scripts/, which illustrate how to download and preprocess task-fMRI data from the Human Connectome Project with hcprep and how to {train on, decode, interpret} fMRI data with the two DeepLight architectures of deeplight.

You can run individual scripts in your poetry environment with:

cd scripts/
poetry run python <SCRIPT NAME>
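
For example, to run the decoding example:

poetry run python decode.py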