Predictive Modeling on Electronic Health Records (EHR) using PyTorch

Overview


Although there are plenty of repos for vision and NLP models, we could find very few that apply deep learning to EHR data. Here we open-source our repo, implementing data preprocessing, data loading, and a zoo of common RNN models. The main goal is to lower the barrier to entry into this field for researchers. We are not claiming any state-of-the-art performance, though our models are quite competitive (a paper describing our work will be available soon).

Based on existing works (e.g., Dr. AI and RETAIN), we represent electronic health records (EHRs) as a pickled list of lists of lists, which contains the histories of patients' diagnoses, medications, and various other events. We integrated all relevant information of a patient's history, allowing easy subsetting.

Currently, this repo includes the following predictive models: vanilla RNN, GRU, LSTM, bidirectional RNN, bidirectional GRU, bidirectional LSTM, dilated RNN, dilated GRU, dilated LSTM, QRNN, and T-LSTM, used to analyze and predict clinical performances. Additionally, we provide tutorials comparing performance to plain logistic regression (LR) and random forest (RF).

Pipeline

[Figure: pipeline]

Primary Results

[Figure: results summary]

Note these results cover two prediction tasks: heart failure (HF) risk and readmission. We showed that simple gated RNNs (GRUs or LSTMs) consistently beat traditional machine-learning baselines (logistic regression (LR) and random forest (RF)). All methods were tuned by Bayesian optimization. All of this is described in this paper.

Folder Organization

  • ehr_pytorch: main folder with modularized components:
    • EHREmb.py: EHR embeddings
    • EHRDataloader.py: a separate module for creating preprocessed batch data, with functionality including sorting on visit length and shuffling batches before feeding.
    • Models.py: multiple different models
    • Utils.py
    • main.py: main execution file
    • tplstm.py: tplstm package file
  • Data
    • toy.train: a pickle file of toy data with the same structure (multi-level lists) as our processed Cerner data; it can be used directly with our models for demonstration purposes;
  • Preprocessing
    • data_preprocessing_v1.py: preprocesses the data from the source dataset to build the required multi-level input structure (a clear description of how to run this file is in its document header)
  • Tutorials
    • RNN_tutorials_toy.ipynb: a Jupyter notebook with examples of how to run our models, with visuals, and/or how to use our dataloader as a standalone;
    • HF prediction for Diabetic Patients.ipynb
    • Early Readmission v2.ipynb
  • trained_models examples (a loading sketch follows this list):
    • hf.trainEHRmodel.log: example log output of the model
    • hf.trainEHRmodel.pth: the actual trained model
    • hf.trainEHRmodel.st: state dictionary
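
As a quick sketch, these artifacts could be loaded as follows (assuming the model classes from Models.py are importable and the paths are relative to the repo root; the exact constructor arguments depend on your training run):

import torch

# option 1: load the fully pickled model object
model = torch.load('trained_models/hf.trainEHRmodel.pth')

# option 2: rebuild the model with the same arguments used for training,
# then restore its weights from the saved state dictionary
state_dict = torch.load('trained_models/hf.trainEHRmodel.st')
model.load_state_dict(state_dict)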

Data Structure

  • We followed the data structure used in RETAIN. Encounters may include pharmacy, clinical and microbiology laboratory, admission, and billing information from affiliated patient care locations. All admissions, medication orders and dispensing, laboratory orders, and specimens are date- and time-stamped, providing a temporal relationship between treatment patterns and clinical information. These clinical data are mapped to the most common standards; for example, diagnoses and procedures are mapped to International Classification of Diseases (ICD) codes, medication information includes National Drug Codes (NDCs), and laboratory tests are linked to their LOINC codes.

  • Our processed pickle data: multi-level lists. From the outermost level inward (assume we have loaded the data as X):

    • Outermost level: patient level, e.g. X[0] is the record for the patient at index 0
    • 2nd level: patient information; X[0][0], X[0][1], and X[0][2] are the patient ID, disease status (1: disease, 0: no disease), and visit records
    • 3rd level: a list whose length is the total number of visits; each element is itself a list of two lists (as described at the 4th level)
    • 4th level: for each visit in the 3rd-level list:
      • 1st element, e.g. X[0][2][0][0], is a list containing the visit_time (time since the last visit)
      • 2nd element, e.g. X[0][2][0][1], is a list of codes corresponding to a single visit
    • 5th level: either a visit_time or a single code
  • An illustration of the data structure is shown below:

[Figure: data structure]

In the implementation, the medical codes are tokenized with a unified dictionary for all patients.

[Figure: data example]

  • Note: as long as you have a multi-level list of this shape, you can use our EHRdataloader to generate batch data and feed them to your model; a concrete sketch follows.
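
To make the nesting concrete, here is a minimal sketch of loading the pickled data and of what one record of this shape could look like (the patient ID, time values, and integer code tokens below are made up for illustration):

import pickle

# load the pickled multi-level lists (e.g. the toy file shipped in Data/)
with open('Data/toy.train', 'rb') as f:
    X = pickle.load(f)

# a hypothetical record with the same shape: [patient_id, disease_status, visits]
example = [
    42,                          # X[i][0]: patient id
    1,                           # X[i][1]: disease status (1: disease, 0: no disease)
    [                            # X[i][2]: one entry per visit
        [[0],  [12, 7, 103]],    # visit 0: [time since last visit], [tokenized codes]
        [[30], [12, 55]],        # visit 1: 30 time units later, two codes
    ],
]

The integer tokens (12, 7, 103, ...) stand in for entries of the unified code dictionary shared across all patients.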

Paper Reference

The paper upon which this repo was built.

Versions

This is Version 0.2; more details are in the release notes.

Dependencies

  • PyTorch 0.4.0 (all models except T-LSTM are compatible with PyTorch 1.4.0; issues that appeared with PyTorch 1.5 are resolved in 1.6)
  • Torchqrnn
  • Pynvrtc
  • sklearn
  • Matplotlib (for visualizations)
  • tqdm
  • Python: 3.6+

Usage

  • For preprocessing, run python data_preprocessing.py. The case and control input files are each just a three-column table of the form pt_id | medical_code | visit/event_date (illustrative rows below).
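
For illustration, a few hypothetical rows of such an input file (the IDs, codes, and dates below are made up):

pt_id | medical_code    | visit/event_date
1001  | ICD9:428.0      | 2016-03-01
1001  | NDC:00093319401 | 2016-03-01
1002  | ICD9:250.00     | 2015-11-17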

  • To run our models, use main.py directly (you don't need to run the dataloader separately; everything can be specified in the args here):

python3 main.py -root_dir <'your folder that contains data file(s)'> -files <['filename(train)' 'filename(valid)' 'filename(test)']> -which_model <'RNN'> -optimizer <'adam'> ... (feed as many args as you please)
  • Example:
python3.7 main.py -root_dir /.../Data/ -files sample.train sample.valid sample.test -input_size 15800 -batch_size 100 -which_model LR -lr 0.01 -eps 1e-06 -L2 1e-04
  • To use our dataloader on its own for generating data batches, use:
data = EHRdataFromPickles(root_dir = '../data/', 
                          file = ['toy.train'])
loader =  EHRdataLoader(data, batch_size = 128)

# Note: if you want to split the data, you must specify the ratios in EHRdataFromPickles(); otherwise, call separate loaders for your separate data files. If you want to shuffle batches before using them, add this line:

loader = iter_batch2(loader, len(loader))

otherwise, directly call

for i, batch in enumerate(loader):
    # feed the batch to your model here
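
Putting the pieces together, a minimal loop that reshuffles the batches at the start of every epoch might look like this (a sketch assuming the loader and the iter_batch2 helper above; what you do with each batch depends on the model you feed it to):

for epoch in range(10):
    # reshuffle the preprocessed batches before each pass over the data
    shuffled = iter_batch2(loader, len(loader))
    for i, batch in enumerate(shuffled):
        # feed the batch to your model here
        ...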

Check out this notebook for a step-by-step guide on how to use our package.

Warning

  • This repo is for research purposes. Use it at your own risk.
  • This repo is under GPL-v3 license.

Acknowledgements

Hat-tip to:

Comments
  • kaplan meier


    I attended your session during the ACM-BCB conference. Great presentation! I have one question regarding survival analysis. What is the purpose of the Kaplan-Meier plot used in the survival analysis in the ModelTraining file? Is it some kind of baseline for your actual models, or does it show that the survival probability predicted by the best model is the same as the Kaplan-Meier estimate?

    opened by mehak25 2
  • Getting embedding error when running main.py with toy.train


    Hi @ZhiGroup and @lrasmy,

    I am very impressed by this work.

    I am getting the attached error when trying to retrieve the embeddings in the EmbedPatients_MB(self, mb_t, mtd) method when using the toy.train file. I just wanted to test the repo's code with this sample data. Should I not use this file, and instead just follow the ACM-BCB-Tutorial to generate the processed data?

    Thank you so much for providing this code and these tutorials; it is very helpful.

    Best Regards,

    Aaron Reich

    [Screenshot: pytorch ehr error]

    opened by agr505 1
  • Cell_type option


    Currently the user can input any cell_type (e.g. a cell_type of "QRNN" for the EHR_RNN model), leading to a mismatch in handling packPadMode.
    => Restrict the cell_type option to "RNN", "GRU", and "LSTM". => Make cell_type "QRNN" and "TLSTM" the defaults for the qrnn and tlstm models, respectively.

    opened by 2miatran 1
  • Mia test


    MODIFIED PARTS: Main.py

    • Modified the code to take data with split options (split is True => split into train, test, and valid; split is False => keep the file and sort)
    • Added a model prefix (the hospital name) and suffix (optional: user input) to the output file
    • Batch_size is used in EHRdataloader => the batch_size parameter needs to be passed to the dataloader instead of ut.epochs_run()
    • Results differ due to the embedding => no modification. Laila's suggestion: change the code in EHREmb.py
    • Eps (currently not required for the current optimizer, Adagrad, but might be needed later for other optimizers)
    • n_layer defaults to 1
    • args = parser.parse_args([])

    Utils:

    • Removed batch_size from all functions
    • Added prefix and suffix to the epochs_run function

    Note: mia_test_1 was first created for testing purposes; please ignore this file.

    opened by 2miatran 1
  • Random results with each run even with setting Random seed


    Testing GPU performance:

    GPU 0 Run 1: Epoch 1 Train_auc : 0.8716401835745263 , Valid_auc : 0.8244826612068169 ,& Test_auc : 0.8398872287083271 Avg Loss: 0.2813216602802277 Train Time (0m 38s) Eval Time (0m 53s)

    Epoch 2 Train_auc : 0.8938440516209567 , Valid_auc : 0.8162852367127903 ,& Test_auc : 0.836586122995983 Avg Loss: 0.26535209695498146 Train Time (0m 38s) Eval Time (0m 53s)

    Epoch 3 Train_auc : 0.9090785000429356 , Valid_auc : 0.8268489421541162 ,& Test_auc : 0.8355234191881434 Avg Loss: 0.25156350443760556 Train Time (0m 38s) Eval Time (0m 53s)

    lrasmy [3:27 PM]

    GPU0 Run 2: Epoch 1 Train_auc : 0.870730593956147 , Valid_auc : 0.8267809126014227 ,& Test_auc : 0.8407658238915342 Avg Loss: 0.28322121808926265 Train Time (0m 39s) Eval Time (0m 53s)

    Epoch 2 Train_auc : 0.8918280081196787 , Valid_auc : 0.814092171574357 ,& Test_auc : 0.8360580004715573 Avg Loss: 0.26621529906988145 Train Time (0m 39s) Eval Time (0m 53s)

    Epoch 3 Train_auc : 0.9128840712381358 , Valid_auc : 0.8237124792427901 ,& Test_auc : 0.839372227662688 Avg Loss: 0.2513388389100631 Train Time (0m 39s) Eval Time (0m 54s)

    lrasmy [3:43 PM]

    GPU0 Run 3: Epoch 1 Train_auc : 0.8719306438569514 , Valid_auc : 0.8290540285789691 ,& Test_auc : 0.8416333372040562 Avg Loss: 0.28306034040947753 Train Time (0m 40s) Eval Time (0m 55s)

    Epoch 2 Train_auc : 0.8962238893571299 , Valid_auc : 0.812984847168468 ,& Test_auc : 0.8358539036875299 Avg Loss: 0.26579822269578773 Train Time (0m 39s) Eval Time (0m 54s)

    Epoch 3 Train_auc : 0.9131959085864382 , Valid_auc : 0.824907504397332 ,& Test_auc : 0.8411787765451596 Avg Loss: 0.24994653667012848 Train Time (0m 40s) Eval Time (0m 54s)

    opened by lrasmy 1
Releases (v0.2-Feb20)
  • v0.2-Feb20 (Feb 21, 2020)

    This release offers faster and more memory-efficient code than the previously released version.

    Key Changes:

    • Moved the creation of padding and mini-batch-related tensors into the EHR_dataloader
    • Created the mini-batch list once before running the epochs
    • Added RETAIN to the models list