Segmentation for medical images.

Overview

EfficientSegmentation

Introduction

EfficientSegmentation is an open-source, PyTorch-based segmentation framework for 3D medical images.

Features

  • A whole-volume-based coarse-to-fine segmentation framework. The segmentation network is decomposed into components, including an encoder, a decoder and a context module. Anisotropic convolution blocks and anisotropic context blocks are designed for efficient and effective segmentation (see the sketch after this list).
  • Multi-process data pre-processing. Distributed and Apex (mixed-precision) training support. Post-processing is performed asynchronously during inference.
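
As a rough illustration of the anisotropic design mentioned above, the sketch below factorizes a 3D convolution into an in-plane 3x3x1 convolution followed by a through-plane 1x1x3 convolution. The block name, layer order and channel choices are assumptions for illustration, not the repository's actual EfficientSegNet code:

import torch
import torch.nn as nn

class AnisotropicConvBlock(nn.Module):
    """Illustrative sketch only; not the repository's implementation."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # In-plane convolution: 3x3 in the slice plane, 1 along the through-plane axis.
        self.conv_inplane = nn.Conv3d(in_channels, out_channels,
                                      kernel_size=(3, 3, 1), padding=(1, 1, 0))
        # Through-plane convolution: 1x1 in-plane, 3 along the through-plane axis.
        self.conv_axial = nn.Conv3d(out_channels, out_channels,
                                    kernel_size=(1, 1, 3), padding=(0, 0, 1))
        self.norm = nn.BatchNorm3d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv_inplane(x)
        x = self.conv_axial(x)
        return self.act(self.norm(x))

# Example: run a small whole-volume tensor through the block.
block = AnisotropicConvBlock(in_channels=1, out_channels=16)
out = block(torch.randn(1, 1, 64, 64, 64))
print(out.shape)  # torch.Size([1, 16, 64, 64, 64])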

Benchmark

Task    | Architecture    | Parameters (MB) | FLOPs (GB) | DSC   | NSC   | Inference time (s) | GPU memory (MB)
FLARE21 | BaseUNet        | 11              | 812        | 0.908 | 0.837 | 0.92               | 3183
FLARE21 | EfficientSegNet | 9               | 333        | 0.919 | 0.848 | 0.46               | 2269

Installation

Installation by docker image

  • Download the docker image.
  link: https://pan.baidu.com/s/1UkMwdntwAc5paCWHoZHj9w 
  password:9m3z
  • Put the abdominal CT images in the current folder $PWD/inputs/.
  • Run the test cases with the following commands:
docker image load < fosun_aitrox.tgz
nvidia-docker container run --name fosun_aitrox --rm -v $PWD/inputs/:/workspace/inputs/ -v $PWD/outputs/:/workspace/outputs/ fosun_aitrox:latest /bin/bash -c "sh predict.sh"

Installation step by step

Environment

  • Ubuntu 16.04.12
  • Python 3.6+
  • PyTorch 1.5.0+
  • CUDA 10.0+

1. Git clone

git clone https://github.com/Shanghai-Aitrox-Technology/EfficientSegmentation.git

2. Install NVIDIA Apex

  • Run the following commands:
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir ./

3. Install dependencies

pip install -r requirements.txt

Get Started

Preprocessing

  1. Download the FLARE21 dataset, which provides 361 training images with masks and 50 validation images.
  2. Copy the images and masks to the 'FlareSeg/dataset/' folder.
  3. Edit 'FlareSeg/data_prepare/config.yaml'. 'DATA_BASE_DIR' (default: FlareSeg/dataset/) is the base directory of the databases. If 'IS_SPLIT_5FOLD' (default: False) is set to true, 5-fold cross-validation datasets will be generated.
  4. Run the data preprocessing with the following command:
python FlareSeg/data_prepare/run.py

The image data and lmdb files are stored in the following structure:

DATA_BASE_DIR directory structure:
├── train_images
│   ├── train_000_0000.nii.gz
│   ├── train_001_0000.nii.gz
│   ├── train_002_0000.nii.gz
│   └── ...
├── train_mask
│   ├── train_000.nii.gz
│   ├── train_001.nii.gz
│   ├── train_002.nii.gz
│   └── ...
├── val_images
│   ├── validation_001_0000.nii.gz
│   ├── validation_002_0000.nii.gz
│   ├── validation_003_0000.nii.gz
│   └── ...
├── file_list
│   ├── train_series_uids.txt
│   ├── val_series_uids.txt
│   └── lesion_case.txt
├── db
│   ├── seg_raw_train            # Information for the 361 training images.
│   ├── seg_raw_test             # Information for the 50 validation images.
│   ├── seg_train_database       # The default training database.
│   ├── seg_val_database         # The default validation database.
│   ├── seg_pre-process_database # Temporary database.
│   ├── seg_train_fold_1
│   └── seg_val_fold_1
├── coarse_image
│   └── 160_160_160
│       ├── train_000.npy
│       ├── train_001.npy
│       └── ...
├── coarse_mask
│   └── 160_160_160
│       ├── train_000.npy
│       ├── train_001.npy
│       └── ...
├── fine_image
│   └── 192_192_192
│       ├── train_000.npy
│       ├── train_001.npy
│       └── ...
└── fine_mask
    └── 192_192_192
        ├── train_000.npy
        ├── train_001.npy
        └── ...
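
After preprocessing, the resampled volumes can be loaded directly for a quick sanity check. The paths follow the tree above; the array shapes are assumptions inferred from the folder names (160^3 for coarse, 192^3 for fine):

import numpy as np

# Hypothetical example case; adjust the base directory to your DATA_BASE_DIR.
coarse = np.load('FlareSeg/dataset/coarse_image/160_160_160/train_000.npy')
fine = np.load('FlareSeg/dataset/fine_image/192_192_192/train_000.npy')
print(coarse.shape, fine.shape)  # expected roughly (160, 160, 160) and (192, 192, 192)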

The data information is stored in the lmdb file with the following format:

{
    series_id: {
        'image_path': data.image_path,
        'mask_path': data.mask_path,
        'smooth_mask_path': data.smooth_mask_path,
        'coarse_image_path': data.coarse_image_path,
        'coarse_mask_path': data.coarse_mask_path,
        'fine_image_path': data.fine_image_path,
        'fine_mask_path': data.fine_mask_path
    }
}
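
To inspect what the preprocessing step wrote into these databases, a minimal sketch along the following lines can be used. The database path and the use of pickle for the stored records are assumptions; the repository may key and serialize the entries differently:

import lmdb
import pickle

# Hypothetical path, following the directory tree above.
db_path = 'FlareSeg/dataset/db/seg_train_database'

env = lmdb.open(db_path, readonly=True, lock=False)
with env.begin() as txn:
    for series_id, raw_value in txn.cursor():
        record = pickle.loads(raw_value)  # assumed serialization format
        print(series_id.decode(), record.get('coarse_image_path'))
env.close()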

Training

Remark: In our experiments, coarse segmentation is trained on 8 Nvidia GeForce 2080Ti GPUs, while fine segmentation is trained on 4 Nvidia A100 GPUs. If you use different hardware, please set "ENVIRONMENT.NUM_GPU", "DATA_LOADER.NUM_WORKER" and "DATA_LOADER.BATCH_SIZE" in the 'FlareSeg/coarse_base_seg/config.yaml' and 'FlareSeg/fine_efficient_seg/config.yaml' files, as in the sketch below.
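
A minimal sketch of adjusting these settings programmatically is shown below. It assumes a nesting in config.yaml that matches the key names quoted above ('ENVIRONMENT.NUM_GPU', 'DATA_LOADER.NUM_WORKER', 'DATA_LOADER.BATCH_SIZE'); the concrete values are placeholders for your hardware:

import yaml

cfg_path = 'FlareSeg/coarse_base_seg/config.yaml'  # likewise for fine_efficient_seg
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg['ENVIRONMENT']['NUM_GPU'] = 2       # number of GPUs available
cfg['DATA_LOADER']['NUM_WORKER'] = 8    # data-loading worker processes
cfg['DATA_LOADER']['BATCH_SIZE'] = 2    # batch size per step

with open(cfg_path, 'w') as f:
    yaml.safe_dump(cfg, f)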

Coarse segmentation:

  • Edit the 'FlareSeg/coarse_base_seg/config.yaml'.
  • Train coarse segmentation with the following command:
cd FlareSeg/coarse_base_seg
sh run.sh

Fine segmentation:

  • Edit the 'FlareSeg/fine_efficient_seg/config.yaml'.
  • Edit the 'FlareSeg/fine_efficient_seg/run.py' and set 'tune_params' for different experiments.
  • Train fine segmentation with the following command:
cd FlareSeg/fine_efficient_seg
sh run.sh

Inference:

  • The model weights are stored in 'FlareSeg/model_weights/'.
  • Run the inference with the following command:
sh predict.sh

Contact

This repository is currently maintained by Fan Zhang ([email protected]) and Yu Wang ([email protected]).

Citation

Acknowledgement

ICRA 2021 "Towards Precise and Efficient Image Guided Depth Completion"

PENet: Precise and Efficient Depth Completion This repo is the PyTorch implementation of our paper to appear in ICRA2021 on "Towards Precise and Effic

232 Dec 25, 2022
VolumeGAN - 3D-aware Image Synthesis via Learning Structural and Textural Representations

VolumeGAN - 3D-aware Image Synthesis via Learning Structural and Textural Representations 3D-aware Image Synthesis via Learning Structural and Textura

GenForce: May Generative Force Be with You 116 Dec 26, 2022
Use MATLAB to simulate the signal and extract features. Use PyTorch to build and train deep network to do spectrum sensing.

Deep-Learning-based-Spectrum-Sensing Use MATLAB to simulate the signal and extract features. Use PyTorch to build and train deep network to do spectru

10 Dec 14, 2022
Code for "Optimizing risk-based breast cancer screening policies with reinforcement learning"

Tempo: Optimizing risk-based breast cancer screening policies with reinforcement learning Introduction This repository was used to develop Tempo, as d

Adam Yala 12 Oct 11, 2022
2021 CCF BDCI 全国信息检索挑战杯(CCIR-Cup)智能人机交互自然语言理解赛道第二名参赛解决方案

2021 CCF BDCI 全国信息检索挑战杯(CCIR-Cup) 智能人机交互自然语言理解赛道第二名解决方案 比赛网址: CCIR-Cup-智能人机交互自然语言理解 1.依赖环境: python==3.8 torch==1.7.1+cu110 numpy==1.19.2 transformers=

JinXiang 22 Oct 29, 2022
Neural-net-from-scratch - A simple Neural Network from scratch in Python using the Pymathrix library

A Simple Neural Network from scratch A Simple Neural Network from scratch in Pyt

Youssef Chafiqui 2 Jan 07, 2022
Re-implementation of the Noise Contrastive Estimation algorithm for pyTorch, following "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models." (Gutmann and Hyvarinen, AISTATS 2010)

Noise Contrastive Estimation for pyTorch Overview This repository contains a re-implementation of the Noise Contrastive Estimation algorithm, implemen

Denis Emelin 42 Nov 24, 2022
Implementation of H-UCRL Algorithm

Implementation of H-UCRL Algorithm This repository is an implementation of the H-UCRL algorithm introduced in Curi, S., Berkenkamp, F., & Krause, A. (

Sebastian Curi 25 May 20, 2022
patchmatch和patchmatchstereo算法的python实现

patchmatch patchmatch以及patchmatchstereo算法的python版实现 patchmatch参考 github patchmatchstereo参考李迎松博士的c++版代码 由于patchmatchstereo没有做任何优化,并且是python的代码,主要是方便解析算

Sanders Bao 11 Dec 02, 2022
It is modified Tensorflow 2.x version of Mask R-CNN

[TF 2.X] Mask R-CNN for Object Detection and Segmentation [Notice] : The original mask-rcnn uses the tensorflow 1.X version. I modified it for tensorf

Milner 34 Nov 09, 2022
This is a simple face recognition mini project that was completed by a team of 3 members in 1 week's time

PeekingDuckling 1. Description This is an implementation of facial identification algorithm to detect and identify the faces of the 3 team members Cla

Eric Kwok 2 Jan 25, 2022
Xview3 solution - XView3 challenge, 2nd place solution

Xview3, 2nd place solution https://iuu.xview.us/ test split aggregate score publ

Selim Seferbekov 24 Nov 23, 2022
202 Jan 06, 2023
Learning where to learn - Gradient sparsity in meta and continual learning

Learning where to learn - Gradient sparsity in meta and continual learning In this paper, we investigate gradient sparsity found by MAML in various co

Johannes Oswald 28 Dec 09, 2022
[NeurIPS'21] "AugMax: Adversarial Composition of Random Augmentations for Robust Training" by Haotao Wang, Chaowei Xiao, Jean Kossaifi, Zhiding Yu, Animashree Anandkumar, and Zhangyang Wang.

AugMax: Adversarial Composition of Random Augmentations for Robust Training Haotao Wang, Chaowei Xiao, Jean Kossaifi, Zhiding Yu, Anima Anandkumar, an

VITA 112 Nov 07, 2022
⚡️Optimizing einsum functions in NumPy, Tensorflow, Dask, and more with contraction order optimization.

Optimized Einsum Optimized Einsum: A tensor contraction order optimizer Optimized einsum can significantly reduce the overall execution time of einsum

Daniel Smith 653 Dec 30, 2022
"Inductive Entity Representations from Text via Link Prediction" @ The Web Conference 2021

Inductive entity representations from text via link prediction This repository contains the code used for the experiments in the paper "Inductive enti

Daniel Daza 45 Jan 09, 2023
Very large and sparse networks appear often in the wild and present unique algorithmic opportunities and challenges for the practitioner

Sparse network learning with snlpy Very large and sparse networks appear often in the wild and present unique algorithmic opportunities and challenges

Andrew Stolman 1 Apr 30, 2021
This library contains a Tensorflow implementation of the paper Stability Analysis of Unfolded WMMSE for Power Allocation

UWMMSE-stability Tensorflow implementation of Stability Analysis of UWMMSE Overview This library contains a Tensorflow implementation of the paper Sta

Arindam Chowdhury 1 Nov 16, 2022
PyTorch Implementation of Meta-StyleSpeech : Multi-Speaker Adaptive Text-to-Speech Generation

StyleSpeech - PyTorch Implementation PyTorch Implementation of Meta-StyleSpeech : Multi-Speaker Adaptive Text-to-Speech Generation. Status (2021.06.13

Keon Lee 140 Dec 21, 2022