Gray Zone Assessment

Get started

1. Clone the GitHub repository:
   git clone https://github.com/andreanne-lemay/gray_zone_assessment.git
2. Build the docker image:
   docker build -t gray_zone docker/
3. Run the docker container:
   docker run -it -v tunnel/to/local/folder:/tunnel --gpus 0 gray_zone:latest bash
4. Run the following commands at the root of the repository to install the modules:
   cd path/to/gray_zone_assessment
   pip install -e .
5. Train the model:
   python run_model.py -o <output/path> -p <resources/training_configs/config.json> -d <image/data/path> -c <path/csv/file.csv>

For more information on the different flags, run `python run_model.py --help`.
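
As a concrete illustration, a full training call could look like the one below. All paths are hypothetical placeholders; substitute your own output directory, configuration file, image folder, and csv file:

    python run_model.py \
        -o /tunnel/experiments/gray_zone_run1 \
        -p gray_zone/resources/training_configs/config.json \
        -d /tunnel/data/images \
        -c /tunnel/data/labels.csv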

Configuration file (flag -p or --param-path)

The configuration file is a JSON file containing the main training parameters.
Example configuration files are located in gray_zone/resources/training_configs/.

Required configuration parameters

| Parameter | Description |
| --- | --- |
| `architecture` | Architecture id from the DenseNet or ResNet family. Choice between 'densenet121', 'densenet169', 'densenet201', 'densenet264', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d', 'wide_resnet50_2', 'wide_resnet101_2'. |
| `model_type` | Choice between "classification", "ordinal", "regression". |
| `loss` | Loss function id. Choice between 'ce' (cross entropy), 'mse' (mean squared error), 'l1' (L1), 'bce' (binary cross entropy), 'coral' (ordinal loss), 'qwk' (quadratic weighted kappa). |
| `batch_size` | Batch size (int). |
| `lr` | Learning rate (float). |
| `n_epochs` | Number of training epochs (int). |
| `device` | Device id (e.g., 'cuda:0', 'cpu') (str). |
| `val_metric` | Validation metric used to select the best model. Choice between "auc" (average ROC AUC over all classes), "val_loss" (minimum validation loss), "kappa" (linear Cohen's kappa); default "accuracy". |
| `dropout_rate` | Dropout rate (float). Required for Monte Carlo dropout models; a dropout rate of 0 disables dropout. |
| `is_weighted_loss` | Indicates if the loss is weighted by the number of cases per class (bool). |
| `is_weighted_sampling` | Indicates if the sampling is weighted by the number of cases per class (bool). |
| `seed` | Random seed (int). |
| `train_frac` | Fraction of cases used for training if the split is not already defined in the csv file; otherwise this parameter is ignored (float). |
| `test_frac` | Fraction of cases used for testing if the split is not already defined in the csv file; otherwise this parameter is ignored (float). |
| `train_transforms` / `val_transforms` | MONAI training / validation transforms with their parameters. Validation transforms are also used during testing (see https://docs.monai.io/en/latest/transforms.html for the list of available transforms). |
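
To make the expected format concrete, below is a minimal sketch of a configuration file assembled from the parameters above. All values are illustrative only, and the transform lists are left empty here; the exact transform syntax should follow the example JSON files in gray_zone/resources/training_configs/.

    {
        "architecture": "densenet121",
        "model_type": "classification",
        "loss": "ce",
        "batch_size": 32,
        "lr": 0.0001,
        "n_epochs": 100,
        "device": "cuda:0",
        "val_metric": "auc",
        "dropout_rate": 0.2,
        "is_weighted_loss": true,
        "is_weighted_sampling": true,
        "seed": 0,
        "train_frac": 0.65,
        "test_frac": 0.25,
        "train_transforms": [],
        "val_transforms": []
    }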

csv file (flag -c or --csv-path)

The provided csv file contains the filenames of the images used for training, the ground truth labels (int from 0 to n_class), the patient id (str), and, optionally, a split column (containing 'train', 'val' or 'test').

Example of a csv file with the default column names. If the column names differ from these defaults, the flags --label-colname, --image-colname, --patient-colname, and --split-colname can be used to indicate the custom column names (see the example command below the table). The csv file may contain additional columns; all of this metadata will be included in predictions.csv and split_df.csv.

| image | label | patient | dataset |
| --- | --- | --- | --- |
| patient1_000.png | 0 | patient1 | train |
| patient1_001.png | 0 | patient1 | train |
| patient2_000.png | 2 | patient2 | val |
| patient2_001.png | 2 | patient2 | val |
| patient2_002.png | 2 | patient2 | val |
| patient3_000.png | 1 | patient3 | test |
| patient3_001.png | 1 | patient3 | test |
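
For instance, if the csv used custom headers such as img_path, grade, patient_id, and split (hypothetical names chosen for illustration), the column flags could be combined with the usual arguments as follows:

    python run_model.py -o <output/path> -p <config/path.json> -d <image/data/path> -c <path/csv/file.csv> \
        --image-colname img_path --label-colname grade --patient-colname patient_id --split-colname split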

Output directory (flag -o or --output-path)


└── output directory                # Output directory specified with `-o`  
    ├──   checkpoints               # All models (one .pth per epoch)  
    |     ├──  checkpoint0.pth   
    |     ├──  ...  
    |     └──  checkpointn.pth   
    ├──   best_metric_model.pth     # Best model based on validation metric  
    ├──   params.json               # Parameters used for training (configuration file)  
    ├──   predictions.csv           # Test results  
    ├──   split_df.csv              # csv file containing image filenames, labels, split and patient id  
    └──   train_record.json         # Record of CLI used to train and other info for reproducibility  