Segmentation for medical images.

Overview

EfficientSegmentation

Introduction

EfficientSegmentation is an open-source, PyTorch-based segmentation framework for 3D medical images.

Features

  • A whole-volume-based coarse-to-fine segmentation framework. The segmentation network is decomposed into separate components: an encoder, a decoder, and a context module. Anisotropic convolution blocks and anisotropic context blocks are designed for efficient and effective segmentation (a sketch follows after this list).
  • Multi-process data pre-processing, with support for distributed and Apex mixed-precision training. Post-processing runs asynchronously during inference.
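
The anisotropic blocks are not detailed in this README. The following is a minimal PyTorch sketch of one plausible anisotropic convolution block, assuming an in-plane 1x3x3 convolution followed by a through-plane 3x1x1 convolution with batch norm and ReLU; the actual EfficientSegNet block may differ.

import torch
import torch.nn as nn

class AnisotropicConvBlock(nn.Module):
    """Illustrative anisotropic convolution block (hypothetical design):
    an in-plane 1x3x3 conv followed by a through-plane 3x1x1 conv,
    each with batch norm and ReLU. Not the exact EfficientSegNet block."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.inplane_conv = nn.Sequential(
            nn.Conv3d(in_channels, out_channels, kernel_size=(1, 3, 3),
                      padding=(0, 1, 1), bias=False),
            nn.BatchNorm3d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.throughplane_conv = nn.Sequential(
            nn.Conv3d(out_channels, out_channels, kernel_size=(3, 1, 1),
                      padding=(1, 0, 0), bias=False),
            nn.BatchNorm3d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # x: (batch, channels, depth, height, width)
        return self.throughplane_conv(self.inplane_conv(x))

if __name__ == "__main__":
    block = AnisotropicConvBlock(16, 32)
    x = torch.randn(1, 16, 32, 64, 64)
    print(block(x).shape)  # torch.Size([1, 32, 32, 64, 64])

Splitting a full 3x3x3 kernel into in-plane and through-plane convolutions like this reduces parameters and FLOPs, which matches the efficiency motivation above.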

Benchmark

Task     Architecture     Parameters (MB)  FLOPs (GB)  DSC    NSC    Inference time (s)  GPU memory (MB)
FLARE21  BaseUNet         11               812         0.908  0.837  0.92                3183
FLARE21  EfficientSegNet  9                333         0.919  0.848  0.46                2269

Installation

Installation by docker image

  • Download the docker image.
  link: https://pan.baidu.com/s/1UkMwdntwAc5paCWHoZHj9w 
  password:9m3z
  • Put the abdominal CT images in the current folder $PWD/inputs/.
  • Run the test cases with the following commands:
docker image load < fosun_aitrox.tgz
nvidia-docker container run --name fosun_aitrox --rm -v $PWD/inputs/:/workspace/inputs/ -v $PWD/outputs/:/workspace/outputs/ fosun_aitrox:latest /bin/bash -c "sh predict.sh"

Installation step by step

Environment

  • Ubuntu 16.04.12
  • Python 3.6+
  • PyTorch 1.5.0+
  • CUDA 10.0+

1. Git clone

git clone https://github.com/Shanghai-Aitrox-Technology/EfficientSegmentation.git

2. Install Nvidia Apex

  • Run the following commands:
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir ./

3. Install dependencies

pip install -r requirements.txt

Get Started

Preprocessing

  1. Download the FLARE21 dataset, which provides 361 training images with masks and 50 validation images.
  2. Copy the images and masks to the 'FlareSeg/dataset/' folder.
  3. Edit 'FlareSeg/data_prepare/config.yaml'. 'DATA_BASE_DIR' (default: FlareSeg/dataset/) is the base directory of the databases. If 'IS_SPLIT_5FOLD' (default: False) is set to true, 5-fold cross-validation datasets are generated (see the sketch after this list).
  4. Run the data preprocessing with the following command:
python FlareSeg/data_prepare/run.py
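
Before running the script, you can sanity-check the two settings mentioned above. This is a minimal sketch assuming a flat key layout in the YAML file; only 'DATA_BASE_DIR' and 'IS_SPLIT_5FOLD' are taken from the description above, and their exact nesting in the repository's config may differ.

# Minimal sketch: print the preprocessing settings before running
# FlareSeg/data_prepare/run.py. The flat key layout is an assumption.
import yaml  # pip install pyyaml

with open("FlareSeg/data_prepare/config.yaml") as f:
    cfg = yaml.safe_load(f)

print("DATA_BASE_DIR :", cfg.get("DATA_BASE_DIR", "FlareSeg/dataset/"))
print("IS_SPLIT_5FOLD:", cfg.get("IS_SPLIT_5FOLD", False))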

The image data and lmdb databases are stored in the following structure:

DATA_BASE_DIR directory structure:
├── train_images
│   ├── train_000_0000.nii.gz
│   ├── train_001_0000.nii.gz
│   ├── train_002_0000.nii.gz
│   └── ...
├── train_mask
│   ├── train_000.nii.gz
│   ├── train_001.nii.gz
│   ├── train_002.nii.gz
│   └── ...
├── val_images
│   ├── validation_001_0000.nii.gz
│   ├── validation_002_0000.nii.gz
│   ├── validation_003_0000.nii.gz
│   └── ...
├── file_list
│   ├── train_series_uids.txt
│   ├── val_series_uids.txt
│   └── lesion_case.txt
├── db
│   ├── seg_raw_train               # Information for the 361 training images.
│   ├── seg_raw_test                # Information for the 50 validation images.
│   ├── seg_train_database          # The default training database.
│   ├── seg_val_database            # The default validation database.
│   ├── seg_pre-process_database    # Temporary database.
│   ├── seg_train_fold_1
│   └── seg_val_fold_1
├── coarse_image
│   └── 160_160_160
│       ├── train_000.npy
│       ├── train_001.npy
│       └── ...
├── coarse_mask
│   └── 160_160_160
│       ├── train_000.npy
│       ├── train_001.npy
│       └── ...
├── fine_image
│   └── 192_192_192
│       ├── train_000.npy
│       ├── train_001.npy
│       └── ...
└── fine_mask
    └── 192_192_192
        ├── train_000.npy
        ├── train_001.npy
        └── ...

The data information is stored in the lmdb file with the following format:

{
    series_id: {
        'image_path': data.image_path,
        'mask_path': data.mask_path,
        'smooth_mask_path': data.smooth_mask_path,
        'coarse_image_path': data.coarse_image_path,
        'coarse_mask_path': data.coarse_mask_path,
        'fine_image_path': data.fine_image_path,
        'fine_mask_path': data.fine_mask_path
    }
}
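
As a rough illustration of how such a record could be read back, here is a sketch using the lmdb Python bindings; the database path and the assumption that values are pickled dictionaries are hypothetical and may not match the repository's actual serialization.

# Illustrative sketch: inspect one record of the preprocessed lmdb database.
# The path and the pickle serialization are assumptions.
import pickle
import lmdb

DB_PATH = "FlareSeg/dataset/db/seg_train_database"  # hypothetical location

env = lmdb.open(DB_PATH, readonly=True, lock=False)
with env.begin() as txn:
    for series_id, value in txn.cursor():
        record = pickle.loads(value)  # dict of image/mask paths (assumed)
        print(series_id.decode(), "->", record.get("coarse_image_path"))
        break  # show only the first record
env.close()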

Training

Remark: In our experiments, coarse segmentation is trained on 8 Nvidia GeForce 2080Ti GPUs, while fine segmentation is trained on 4 Nvidia A100 GPUs. If you use different hardware, set "ENVIRONMENT.NUM_GPU", "DATA_LOADER.NUM_WORKER" and "DATA_LOADER.BATCH_SIZE" in the 'FlareSeg/coarse_base_seg/config.yaml' and 'FlareSeg/fine_efficient_seg/config.yaml' files.
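
For example, the following sketch updates those values programmatically, assuming the YAML nesting mirrors the dotted key names above. Hand-editing the files works just as well and preserves any YAML comments that a programmatic rewrite would drop.

# Minimal sketch: adapt the coarse-segmentation config to local hardware.
# The nested key layout is assumed to mirror the dotted names above.
import yaml  # pip install pyyaml

CFG = "FlareSeg/coarse_base_seg/config.yaml"

with open(CFG) as f:
    cfg = yaml.safe_load(f)

cfg.setdefault("ENVIRONMENT", {})["NUM_GPU"] = 2     # GPUs available locally
cfg.setdefault("DATA_LOADER", {})["NUM_WORKER"] = 4  # workers per data loader
cfg["DATA_LOADER"]["BATCH_SIZE"] = 2                 # fit your GPU memory

with open(CFG, "w") as f:
    yaml.safe_dump(cfg, f)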

Coarse segmentation:

  • Edit the 'FlareSeg/coarse_base_seg/config.yaml'.
  • Train coarse segmentation with the following command:
cd FlareSeg/coarse_base_seg
sh run.sh

Fine segmentation:

  • Edit the 'FlareSeg/fine_efficient_seg/config.yaml'.
  • Edit 'FlareSeg/fine_efficient_seg/run.py' and set 'tune_params' for different experiments.
  • Train fine segmentation with the following command:
cd FlareSeg/fine_efficient_seg
sh run.sh

Inference:

  • The model weights are stored in 'FlareSeg/model_weights/'.
  • Run the inference with the following command:
sh predict.sh

Contact

This repository is currently maintained by Fan Zhang ([email protected]) and Yu Wang ([email protected]).

Citation

Acknowledgement

PyTorchCV: A PyTorch-Based Framework for Deep Learning in Computer Vision, by Donny You.
