Overview

Efficient Multi-Organ Segmentation Using SpatialConfiguration-Net with Low GPU Memory Requirements

Our implementation used for the MICCAI 2021 FLARE Challenge, titled 'Efficient Multi-Organ Segmentation Using SpatialConfiguration-Net with Low GPU Memory Requirements'.

The scripts require the MedicalDataAugmentationTool framework by Christian Payer; download it and add it to your PYTHONPATH.
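If you prefer not to set the environment variable globally, a minimal alternative is to extend the module search path at the top of each script (the path below is a placeholder for your local clone):

import sys
# Placeholder path: point this to your local clone of MedicalDataAugmentationTool.
sys.path.append('/path/to/MedicalDataAugmentationTool')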

If you have questions about the code, please write me an e-mail.

Dependencies

The following frameworks/libraries were used in the versions stated below. If you run into problems with the libraries, please verify that you have the same versions installed.

  • Python 3.9
  • TensorFlow 2.6
  • SimpleITK 2.0
  • Numpy 1.20

Dataset and Preprocessing

The dataset as well as a detailed description of it can be found on the challenge website. Follow the steps described there to download the data.

In the script preprocessing/preprocessing.py, set base_dataset_folder to the folder containing the downloaded TrainingImg, TrainingMask and ValidationImg, then execute the script to generate TrainingImg_small and TrainingMask_small.
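To illustrate what this step does, the following is a minimal sketch of downsampling with SimpleITK; it is not the actual preprocessing/preprocessing.py, and the downsampling factor is an assumption chosen purely for illustration:

import os
import SimpleITK as sitk

base_dataset_folder = '/path/to/base_dataset_folder'  # placeholder
input_folder = os.path.join(base_dataset_folder, 'TrainingImg')
output_folder = os.path.join(base_dataset_folder, 'TrainingImg_small')
os.makedirs(output_folder, exist_ok=True)

factor = 4  # assumed downsampling factor, for illustration only
for filename in sorted(os.listdir(input_folder)):
    image = sitk.ReadImage(os.path.join(input_folder, filename))
    new_size = [max(1, s // factor) for s in image.GetSize()]
    # Keep the physical extent: spacing_new = spacing_old * size_old / size_new.
    new_spacing = [sp * sz / ns for sp, sz, ns in
                   zip(image.GetSpacing(), image.GetSize(), new_size)]
    small = sitk.Resample(image, new_size, sitk.Transform(), sitk.sitkLinear,
                          image.GetOrigin(), new_spacing, image.GetDirection(),
                          0, image.GetPixelID())
    sitk.WriteImage(small, os.path.join(output_folder, filename))

For the masks, nearest-neighbor interpolation (sitk.sitkNearestNeighbor) would replace the linear interpolator so that label values are preserved.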

Also, download the setup folder provided in this repository and place it in the base_dataset_folder; the following structure is expected:

.                                       # The `base_dataset_folder` of the dataset
├── TrainingImg                         # Image folder containing all training images
│   ├── train_000_0000.nii.gz            
│   ├── ...                   
│   └── train_360_0000.nii.gz            
├── TrainingMask                        # Image folder containing all training masks
│   ├── train_000.nii.gz            
│   ├── ...                   
│   └── train_360.nii.gz  
├── ValidationImg                       # Image folder containing all validation images
│   ├── validation_000_0000.nii.gz            
│   ├── ...                   
│   └── validation_360_0000.nii.gz  
├── TrainingImg_small                   # Image folder containing all downsampled training images generated by `preprocessing/preprocessing.py`
│   ├── train_000_0000.nii.gz            
│   ├── ...                   
│   └── train_360_0000.nii.gz  
├── TrainingMask_small                  # Image folder containing all downsampled training masks generated by `preprocessing/preprocessing.py`
│   ├── train_000_0000.nii.gz            
│   ├── ...                   
│   └── train_360_0000.nii.gz  
└── setup                               # Setup folder as provided in this repository

Train Models

To train a localization model, run localization/main.py after defining the base_dataset_folder as well as the base_output_folder.

To train a segmentation model, run scn/main.py. Again, base_dataset_folder and base_output_folder need to be set accordingly beforehand.

In both cases, the variable cv in the function run() can be set to 0, 1, 2, 3 or 4. The values 1-4 select the respective cross-validation fold. When choosing 0, all training data is used to train the model, which also deactivates the generation of test outputs.

Further parameters like the number of training iterations (max_iter) and the number of iterations after which to perform testing (test_iter) can be modified in __init__() of the MainLoop class.
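As a purely hypothetical sketch of how these settings relate (parameter names follow the description above, but the values and class body are placeholders, not the real implementation):

class MainLoop:
    def __init__(self, cv, max_iter=100000, test_iter=10000):
        # cv == 0: train on all data (test outputs disabled); cv in 1-4: CV fold.
        self.cv = cv
        self.max_iter = max_iter    # total number of training iterations
        self.test_iter = test_iter  # run testing every test_iter iterations

def run(cv=1):
    loop = MainLoop(cv)
    print(f'fold={loop.cv}, max_iter={loop.max_iter}, test_iter={loop.test_iter}')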

Generate a SavedModel

To convert a trained network into a SavedModel, the script localization/main_create_model.py or scn/main_create_model.py, respectively, can be used after a model has been trained.

Before running the respective script, the variable load_model_base needs to be set to the trained model's output folder, e.g., .../localization/cv1/2021-09-27_13-18-59.

Furthermore, load_model_iter should be set to the same value as the max_iter used during training, i.e., to an iteration for which the network weights have been saved.
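A minimal sketch of such an export follows; the toy network and the checkpoint layout/naming are assumptions for illustration, and the actual export logic lives in the main_create_model.py scripts:

import tensorflow as tf

# Toy stand-in for the trained network; the real architecture is built by the
# training scripts.
model = tf.keras.Sequential([tf.keras.layers.Conv3D(8, 3, padding='same')])
model.build(input_shape=(None, 64, 64, 64, 1))

load_model_base = '.../localization/cv1/2021-09-27_13-18-59'  # as in the example above
load_model_iter = 100000  # must match an iteration with saved weights
ckpt = tf.train.Checkpoint(model=model)
# Checkpoint file naming below is an assumption for this sketch.
ckpt.restore(f'{load_model_base}/weights/ckpt-{load_model_iter}').expect_partial()

tf.saved_model.save(model, 'saved_models/localization')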

Generate tf_utils_module

The script inference/inference_tf_utils_module.py traces the tf.functions used for preprocessing during inference, saves them as a SavedModel and thereby generates saved_models/tf_utils_module.

To do so, the input_path and output_path need to be defined in the script. The input_path is expected to contain valid images; we suggest using the folder ValidationImg.
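Conceptually, tracing works by attaching tf.functions with fixed input signatures to a tf.Module and saving it. A minimal sketch, where the preprocessing body is a placeholder rather than the actual functions:

import tensorflow as tf

class TfUtilsModule(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None, None, None], tf.float32)])
    def preprocess(self, image):
        # Placeholder preprocessing: clamp CT intensities and rescale to [0, 1].
        image = tf.clip_by_value(image, -1024.0, 1024.0)
        return (image + 1024.0) / 2048.0

# Saving traces every tf.function with an input_signature into the SavedModel.
tf.saved_model.save(TfUtilsModule(), 'saved_models/tf_utils_module')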

Inference

The provided inference script can be used to evaluate the performance of our method on unseen data efficiently.

The script inference/inference.py requires that all SavedModels are present in the saved_models folder, i.e., saved_models/localization, saved_models/segmentation and saved_models/tf_utils_module need to contain the respective SavedModel. Either use the provided SavedModels for inference by copying them from submitted_saved_models to saved_models, or use your own models generated as described above.

Additionally, the input_path and output_path need to be defined in the script. The input_path is expected to contain valid images; we suggest using the folder ValidationImg.

.                                       # The base folder of this repository.
├── saved_models                        # Required by `inference.py`.
│   ├── localization                    # SavedModel of the localization model.
│   │   ├── assets
│   │   ├── variables
│   │   └── saved_model.pb
│   ├── segmentation                    # SavedModel of the segmentation (scn) model.
│   │   ├── assets
│   │   ├── variables
│   │   └── saved_model.pb
│   └── tf_utils_module                 # SavedModel of the tf.functions used for preprocessing during inference.
│       ├── assets
│       ├── variables
│       └── saved_model.pb
...
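Structurally, the inference loop loads the three SavedModels and applies them to each image. A rough sketch; the paths are placeholders and the real pipeline, including the calls into the loaded modules, is implemented in inference/inference.py:

import os
import SimpleITK as sitk
import tensorflow as tf

input_path = '/path/to/ValidationImg'  # placeholder
output_path = '/path/to/output'        # placeholder

localization = tf.saved_model.load('saved_models/localization')
segmentation = tf.saved_model.load('saved_models/segmentation')
tf_utils = tf.saved_model.load('saved_models/tf_utils_module')

for filename in sorted(os.listdir(input_path)):
    image = sitk.ReadImage(os.path.join(input_path, filename))
    tensor = tf.constant(sitk.GetArrayFromImage(image), tf.float32)
    # ... preprocess via tf_utils, crop via localization, predict via
    # segmentation, then write the resulting mask to output_path ...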

Docker

The provided Dockerfile can be used to generate a Docker image that can readily be used for inference. The SavedModels are expected in the folder saved_models; either copy the provided SavedModels from submitted_saved_models to saved_models or generate your own. If you have problems setting up Docker, please refer to the documentation.

To build the Docker image, run the following command in the folder containing the Dockerfile.

docker build -t icg .

To run the built Docker image, use the command below after defining the input and output directories within the command. We recommend using ValidationImg as the input folder.

If you have multiple GPUs and want to select a specific one to run the Docker image, change /dev/nvidia0 to the respective GPU's identifier, e.g., /dev/nvidia1.

docker container run --gpus all --device /dev/nvidia0 --device /dev/nvidia-uvm \
  --device /dev/nvidia-uvm-tools --device /dev/nvidiactl --name icg --rm \
  -v /PATH/TO/DATASET/ValidationImg/:/workspace/inputs/ \
  -v /PATH/TO/OUTPUT/FOLDER/:/workspace/outputs/ \
  icg:latest /bin/bash -c "sh predict.sh"

Citation

If you use this code for your research, please cite our paper.

Efficient Multi-Organ Segmentation Using SpatialConfiguration-Net with Low GPU Memory Requirements

@article{Thaler2021Efficient,
  title={Efficient Multi-Organ Segmentation Using SpatialConfiguration-Net with Low GPU Memory Requirements},
  author={Thaler, Franz and Payer, Christian and Bischof, Horst and {\v{S}}tern, Darko},
  year={2021}
}