eCaReNet
This code is for eCaReNet: explainable Cancer Relapse Prediction Network (Towards Explainable End-to-End Prostate Cancer Relapse Prediction from H&E Images Combining Self-Attention Multiple Instance Learning with a Recurrent Neural Network, Dietrich, E., Fuhlert, P., Ernst, A., Sauter, G., Lennartz, M., Stiehl, H. S., Zimmermann, M., Bonn, S. - ML4H 2021)

eCaReNet takes histopathology images (TMA spots) as input and predicts a survival curve and a risk score for individual patients. The network consists of an optional self-attention layer, an RNN and an attention-based Multiple Instance Learning module for explainability. To increase model performance, we suggest including a binary prediction of a relapse as additional input to the model.
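The following Keras code is a rough structural sketch of this pipeline (shared CNN backbone per patch, optional self-attention, an RNN and attention-based MIL pooling). All layer sizes, the patch layout and the survival head are assumptions for illustration, not the authors' exact architecture:

import tensorflow as tf
from tensorflow.keras import layers

def build_sketch(n_patches=16, patch_size=128, n_intervals=10):
    # bag of image patches from one TMA spot
    patches = tf.keras.Input(shape=(n_patches, patch_size, patch_size, 3))
    # shared CNN backbone extracts one feature vector per patch
    backbone = tf.keras.applications.InceptionV3(
        include_top=False, weights=None, pooling="avg",
        input_shape=(patch_size, patch_size, 3))
    feats = layers.TimeDistributed(backbone)(patches)   # (batch, n_patches, d)
    # optional self-attention across patches
    feats = layers.MultiHeadAttention(num_heads=4, key_dim=64)(feats, feats)
    # RNN over the patch sequence
    seq = layers.GRU(64, return_sequences=True)(feats)
    # attention-based MIL: one weight per patch, pooled to a bag vector
    att = layers.Softmax(axis=1)(layers.Dense(1, activation="tanh")(seq))
    bag = tf.reduce_sum(att * seq, axis=1)
    # per-interval survival probabilities; a risk score can be derived from these
    survival = layers.Dense(n_intervals, activation="sigmoid")(bag)
    return tf.keras.Model(patches, survival)

model = build_sketch()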

TL;DR

  • store your dataset information in a .csv file

  • make your own my_config.yaml, following the example in config.yaml

  • run $ python train_model.py with my_config.yaml

Requirements and Installation

  • Python and TensorFlow
  • npm install -g omniboard to view results in the browser and inspect experiments

Data preprocessing

All annotations of your images need to be stored in a .csv file with the image path and annotations as columns. You need separate csv files for your training, validation and test sets. Here is an example:

img_path   censored   relapse_time   survived_2years   ISUP_score
img1.png   0          80.3           1                  3

The columns can be named as you wish; you need to tell the code which columns to use in the config file (see below).
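For illustration, the example table above could be written with pandas (file name and column names are just the ones from the example):

import pandas as pd

# Sketch: write an annotation .csv like the example above. Column names are
# free; they are mapped to the model via the config file.
annotations = pd.DataFrame({
    "img_path": ["img1.png"],
    "censored": [0],
    "relapse_time": [80.3],
    "survived_2years": [1],
    "ISUP_score": [3],
})
annotations.to_csv("train.csv", index=False)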

config file

The config file (config.yaml) is needed to define the directories where the images and the training, validation and test .csv files are stored. Further, you can choose whether to train a classification model (for M_ISUP or M_Bin) or the survival model eCaReNet, and which loss function and optimizer to use. The preprocessing (patching, resizing, ...) is also defined here. Details are found in config.yaml. It is best to create a custom my_config.yaml file and run the code as

$ python train_model.py with my_config.yaml 

You can also change single parameters on the command line, for example

$ python train_model.py with my_config.yaml general.seed="13" 
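The parameter paths general.seed and training.model_save_path used in this README suggest a nested config; a minimal sketch of inspecting such a file with PyYAML (the surrounding file structure is an assumption):

import yaml

# Sketch: load my_config.yaml and read two keys referenced in this README.
with open("my_config.yaml") as f:
    cfg = yaml.safe_load(f)
print(cfg["general"]["seed"])
print(cfg["training"]["model_save_path"])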

Training procedure

As recommended in our paper, first train M_ISUP with config_isup.yaml, i.e. train your base network (Inception or another) on a classification task for transfer learning. Second, train a binary classifier with config_bin.yaml and choose an appropriate time point on which to base the decision. Here, you need to load the pretrained model from step one; do not load Inception or other Keras models. For the third step, the predictions of the second model (M_Bin) are needed, so please store them in the .csv file. Then again load the model from step one, this time include the predictions as additional input, and train eCaReNet (see the example commands below).
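Assuming the survival model is configured in your my_config.yaml as above, the three steps could be run as:

$ python train_model.py with config_isup.yaml
$ python train_model.py with config_bin.yaml
$ python train_model.py with my_config.yaml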

Unittests

For most functions, a unittest is given in the test folder. The tests can be used to check that a function still works correctly after adapting it (e.g. adding functionality or speeding it up). They are also useful for debugging, to locate errors or to find out what a function actually does, which is faster than running the whole code.
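Assuming standard unittest discovery and that the tests live in the test folder, they can be run with:

$ python -m unittest discover test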

Docker

In the docker_context folder, the Dockerfile and requirements are stored. To build a Docker image, run

$ "docker build -t IMAGE_NAME:DATE docker_context"
$ docker build  -t ecarenet_docker:2021_10 docker_context

To use the image, run something like the following, adapted to your own paths and resources:

$ docker run --gpus=all --cpuset-cpus=5,6 --user `id --user`:`id --group` -it --rm -v /home/UNAME/PycharmProjects/ecarenet:/opt/project -v /PATH/TO/DATA:/data --network NETWORK_NAME --name MY_DOCKER_CONTAINER ecarenet_docker:2021_10 bash

More information on docker can be found here: https://docs.docker.com/get-started/

sacred

We use sacred (https://sacred.readthedocs.io/en/stable/) with a MongoDB (https://sacred.readthedocs.io/en/stable/observers.html, https://www.mongodb.com/) to store all experiments. For each training run, a folder with an incrementing id is created automatically, and all information about the run is stored there: resulting weights, plots, metrics and losses. The folder location is defined in settings/default_settings and in the config under training.model_save_path.
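A minimal sketch of how a sacred experiment is connected to the MongoDB (the experiment name is an assumption; the database name "sacred" matches the omniboard command below):

from sacred import Experiment
from sacred.observers import MongoObserver

# Sketch: attach a MongoDB observer so each run is stored in the database.
# The port must match the PORTNUMBER mapped when starting the MongoDB container.
ex = Experiment("ecarenet")
ex.observers.append(MongoObserver(url="localhost:27017", db_name="sacred"))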

The code also works without MongoDB if you want; all results are still stored on disk. If you do want to use MongoDB, you need to run a Docker container with a MongoDB:

$ docker run -d -p PORTNUMBER:27017 -v ./my_data_folder:/data/db --name name_of_my_mongodb mongo

Create a network:

$ docker network create NETWORK_NAME

Attach the container to the network:

$ docker network connect NETWORK_NAME name_of_my_mongodb

Then, during training, the --network NETWORK_NAME option needs to be set (see the docker run command above). Use omniboard to inspect the results:

$ omniboard -m localhost:PORTNUMBER:sacred

Tensorflow

Using TensorFlow's tf.data API speeds up the data generation and preprocessing steps. If the dataset is very large, it can be cached with the dataset's .cache() method, either in memory (.cache()) or to a file (.cache('/path/to/folder/plus/filename')) [in dataset_creation/dataset_main.py]. With TensorFlow, it is also best not to use numpy functions excessively, or to decorate the functions that use them with @tf.function. Functions decorated with @tf.function are traced into the TensorFlow graph once, instead of being created again and again. For debugging, you need to remove the @tf.function decorator, because otherwise the function (and breakpoints inside it) will be skipped.
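A minimal sketch of the caching and tracing behaviour described above (the toy data and preprocessing function are assumptions, not repository code):

import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices(tf.range(10, dtype=tf.float32))

@tf.function  # traced into the graph once instead of being run eagerly per call
def preprocess(x):
    return x / 255.0

ds = ds.map(preprocess)
ds = ds.cache()  # cache in memory ...
# ds = ds.cache('/path/to/folder/plus/filename')  # ... or to a file

for batch in ds.batch(4):
    print(batch)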

Owner

Institute of Medical Systems Biology