1st place solution in CCF BDCI 2021 ULSEG challenge

This is the source code of the 1st place solution for ultrasound image angioma segmentation task (with Dice 90.32%) in 2021 CCF BDCI challenge.

[Challenge leaderboard 🏆]

1 Pipeline of our solution

Our solution includes data pre-processing, network training, ensemble inference and data post-processing.

[Figure: Ultrasound images of hemangioma segmentation framework]

1.1 Data pre-processing

To improve our performance on the leaderboard, we use 5-fold cross-validation to evaluate our proposed method. In our opinion, it is necessary to keep the tumor-size distribution consistent between the training and validation sets. We therefore calculate the tumor area of each image and categorize the tumors into three size classes: 1) less than 3200 pixels, 2) between 3200 and 7200 pixels, and 3) greater than 7200 pixels. The two thresholds, 3200 and 7200 pixels, are close to the tertiles of the area distribution. We split the images of each size class into 5 folds and then merge the corresponding folds across classes, so that the final 5 folds have similar size distributions.
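
The following is a minimal sketch of such a size-stratified 5-fold split (an illustration of the idea rather than the repo's preprocess.py; the paths, file names, and CSV columns are assumptions):

# Size-stratified 5-fold split: tumors are binned by area (tertile-like thresholds
# of 3200 and 7200 pixels) and folds are drawn so each fold has a similar size mix.
import glob
import os

import numpy as np
import pandas as pd
from PIL import Image
from sklearn.model_selection import StratifiedKFold

label_dir = "./train_data/label"                         # binary tumor masks (PNG)
records = []
for path in sorted(glob.glob(os.path.join(label_dir, "*.png"))):
    area = int((np.array(Image.open(path)) > 0).sum())   # tumor area in pixels
    size_class = 0 if area < 3200 else (1 if area < 7200 else 2)
    records.append({"name": os.path.basename(path), "size_class": size_class})

df = pd.DataFrame(records)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
df["fold"] = -1
for k, (_, val_idx) in enumerate(skf.split(df, df["size_class"]), start=1):
    df.loc[val_idx, "fold"] = k                          # every fold contains all three size classes

df.to_csv("./train_data/train.csv", index=False)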

[Figure: Tumors of different sizes]

1.2 Network training

Due to the small size of the training set, we chose a lightweight network structure for this competition: Linknet with an efficientnet-B6 encoder. The following methods are used for data augmentation (DA): 1) horizontal flipping, 2) vertical flipping, 3) random cropping, 4) random affine transformation, 5) random scaling, 6) random translation, 7) random rotation, and 8) random shearing. In addition, one of the following methods is randomly selected for enhanced data augmentation (EDA): 1) sharpening, 2) local distortion, 3) contrast adjustment, 4) blurring (Gaussian, mean, or median), 5) Gaussian noise, and 6) erasing.
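
As an illustration of the DA/EDA pipeline, the sketch below builds an equivalent set of transforms with albumentations (the repo's own implementation may use a different augmentation library and different hyperparameters):

# DA: flips and a random affine (scale / translate / rotate / shear);
# EDA: one of several stronger distortions picked at random per sample.
import albumentations as A

train_aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=30, p=0.5),
    A.Affine(shear=(-10, 10), p=0.3),
    A.OneOf([
        A.Sharpen(),                      # sharpening
        A.ElasticTransform(),             # local distortion
        A.RandomBrightnessContrast(),     # contrast adjustment
        A.GaussianBlur(),                 # blurring
        A.MedianBlur(blur_limit=5),
        A.GaussNoise(),                   # Gaussian noise
        A.CoarseDropout(),                # erasing
    ], p=0.5),
])

# Usage: out = train_aug(image=image, mask=mask); then use out["image"] and out["mask"]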

1.3 Ensemble inference

We ensemble five models (one per fold) and apply test-time augmentation (TTA) to each model. TTA generally improves the generalization ability of the segmentation model. In our framework, the TTA consists of vertical flipping, horizontal flipping, and rotation by 180 degrees.
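
Since the TTA in this repo is built on qubvel's ttach (see the Acknowledgement), the inference logic is roughly equivalent to the sketch below; the exact merge mode and thresholding in inference.py may differ:

# Ensemble the five fold models; each model's prediction is averaged over the
# TTA transforms (horizontal flip, vertical flip, 180-degree rotation).
import torch
import ttach as tta

tta_transforms = tta.Compose([
    tta.HorizontalFlip(),
    tta.VerticalFlip(),
    tta.Rotate90(angles=[0, 180]),        # identity + rotation by 180 degrees
])

def ensemble_predict(models, image):
    """Average the TTA probability maps of the five fold models for one image tensor."""
    probs = []
    with torch.no_grad():
        for model in models:
            wrapped = tta.SegmentationTTAWrapper(model, tta_transforms, merge_mode="mean")
            probs.append(torch.sigmoid(wrapped(image)))
    return torch.stack(probs).mean(dim=0)  # averaged probability map, same shape as one output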

1.4 Data post-processing

We post-process the obtained binary mask by removing small isolated points (RSIP) and applying edge median filtering (EMF). The edge of the predicted tumor is not smooth enough, which does not quite match the physicians' manual annotations, so we adopt a small trick: we apply a median filter specifically to the edge region. Our experiments show that this improves the accuracy of tumor segmentation.
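
The two steps can be sketched as follows (this mirrors what postprocess.py exposes through --threshood and --kernel, but the exact parameter semantics here are assumptions):

# RSIP: drop connected components smaller than a pixel-area threshold.
# EMF: median-filter only a band around the predicted boundary to smooth it.
import cv2
import numpy as np

def remove_small_isolated_points(mask, min_area=50):
    num, labels, stats, _ = cv2.connectedComponentsWithStats((mask > 0).astype(np.uint8))
    cleaned = np.zeros_like(mask)
    for i in range(1, num):                                      # label 0 is background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == i] = 255
    return cleaned

def edge_median_filter(mask, kernel=20):
    k = np.ones((kernel, kernel), np.uint8)
    edge_band = cv2.morphologyEx(mask, cv2.MORPH_GRADIENT, k)    # dilation minus erosion
    ksize = kernel if kernel % 2 == 1 else kernel + 1            # medianBlur needs an odd size
    smoothed = cv2.medianBlur(mask, ksize)
    out = mask.copy()
    out[edge_band > 0] = smoothed[edge_band > 0]                 # replace only the edge region
    return out

# Usage on a uint8 {0, 255} mask: mask = edge_median_filter(remove_small_isolated_points(mask))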

2 Segmentation results on 2021 CCF BDCI dataset

We test our method on the 2021 CCF BDCI dataset (215 images for training and 107 for testing). The segmentation results of 5-fold CV based on "Linknet with efficientnet-B6 encoder" are as follows:

fold Linknet Unet Att-Unet DeeplabV3+ Efficient-b5 Efficient-b6 Resnet-34 DA EDA TTA RSIP EMF Dice (%)
1 √ 85.06
1 √ √ 84.48
1 √ √ 84.72
1 √ √ 84.93
1 √ √ 86.52
1 √ √ 86.18
1 √ √ 86.91
1 √ √ √ 87.38
1 √ √ √ 88.36
1 √ √ √ √ 89.05
1 √ √ √ √ √ 89.20
1 √ √ √ √ √ √ 89.52
E √ √ √ √ √ √ 90.32

3 How to run this code?

Here, we split the whole process into 5 steps so that you can easily replicate our results or perform the whole pipeline on your private custom dataset.

  • step0, preparation of the environment
  • step1, run the script preprocess.py to perform the preprocessing
  • step2, run the script train.py to train our model
  • step3, run the script inference.py to run inference on the test data
  • step4, run the script postprocess.py to perform the postprocessing

You should prepare your data in the format of the 2021 CCF BDCI dataset. This is very simple: you only need two folders that store the PNG-format images and masks, respectively. You can download the dataset from [Homepage].

The complete file structure is as follows:

  |--- CCF-BDCI-2021-ULSEG-Rank1st
      |--- segmentation_models_pytorch_4TorchLessThan120
          |--- ...
          |--- ...
      |--- saved_model
          |--- pred
          |--- weights
      |--- best_model
          |--- best_model1.pth
          |--- ...
          |--- best_model5.pth
      |--- train_data
          |--- img
          |--- label
          |--- train.csv
      |--- test_data
          |--- img
          |--- predict
      |--- dataset.py
      |--- inference.py
      |--- losses.py
      |--- metrics.py
      |--- ploting.py
      |--- preprocess.py
      |--- postprocess.py
      |--- util.py
      |--- train.py
      |--- visualization.py
      |--- requirement.txt

3.1 Step0 preparation of environment

We have tested our code in the following environment:

To install the required packages, run:

pip install -r requirements.txt

3.2 Step1 preprocessing

In step 1, run the following script; train.csv will be generated under the train_data folder:

python preprocess.py \
--image_path="./train_data/label" \
--csv_path="./train_data/train.csv"

3.3 Step2 training

With the csv file train.csv, you can directly perform K-fold cross-validation (5-fold by default). The script uses a fixed random seed to ensure that the K-fold CV of each experiment is reproducible. Run the following code:

python train.py \
--input_channel=1 \
--output_class=1 \
--image_resolution=256 \
--epochs=100 \
--num_workers=2 \
--device=0 \
--batch_size=8 \
--backbone="efficientnet-b6" \
--network="Linknet" \
--initial_learning_rate=1e-7 \
--t_max=110 \
--folds=5 \
--k_th_fold=1 \
--fold_file_list="./train_data/train.csv" \
--train_dataset_path="./train_data/img" \
--train_gt_dataset_path="./train_data/label" \
--saved_model_path="./saved_model" \
--visualize_of_data_aug_path="./saved_model/pred" \
--weights_path="./saved_model/weights" \
--weights="./saved_model/weights/best_model.pth" 

By setting the parameter k_th_fold to each value from 1 to folds and re-running the command, you can train all K folds. After each fold finishes training, you need to copy the resulting .pth file from the weights path to the best_model folder.
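
A convenience sketch (not part of the repo) that runs the five folds in sequence and copies each fold's best weights could look like this:

# Reuses the flags from the train.py command above; adjust them to your setup.
import shutil
import subprocess

COMMON_ARGS = [
    "--input_channel=1", "--output_class=1", "--image_resolution=256",
    "--epochs=100", "--num_workers=2", "--device=0", "--batch_size=8",
    "--backbone=efficientnet-b6", "--network=Linknet",
    "--initial_learning_rate=1e-7", "--t_max=110", "--folds=5",
    "--fold_file_list=./train_data/train.csv",
    "--train_dataset_path=./train_data/img",
    "--train_gt_dataset_path=./train_data/label",
    "--saved_model_path=./saved_model",
    "--visualize_of_data_aug_path=./saved_model/pred",
    "--weights_path=./saved_model/weights",
    "--weights=./saved_model/weights/best_model.pth",
]

for k in range(1, 6):
    subprocess.run(["python", "train.py", f"--k_th_fold={k}", *COMMON_ARGS], check=True)
    # Assumption: train.py leaves the best checkpoint of the current fold at this path.
    shutil.copy("./saved_model/weights/best_model.pth", f"./best_model/best_model{k}.pth")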

3.4 Step3 inference (test)

Before running the script, make sure that you have generated five models and saved them in the best_model folder. Run the following code:

python inference.py \
--input_channel=1 \
--output_class=1 \
--image_resolution=256 \
--device=0 \
--backbone="efficientnet-b6" \
--network="Linknet" \
--weights1="./saved_model/weights/best_model1.pth" \
--weights2="./saved_model/weights/best_model2.pth" \
--weights3="./saved_model/weights/best_model3.pth" \
--weights4="./saved_model/weights/best_model4.pth" \
--weights5="./saved_model/weights/best_model5.pth" \
--test_path="./test_data/img" \
--saved_path="./test_data/predict" 

The results of the model inference will be saved in the predict folder.

3.5 Step4 postprocessing

Run the following code:

python postprocess.py \
--image_path="./test_data/predict" \
--threshood=50 \
--kernel=20 

Alternatively, if you want to inspect the overlap between the predicted mask and the original image, we also provide a visualization script, visualization.py: modify the image path in the code and run the script directly.
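
Independently of visualization.py, a minimal overlay sketch looks like this (the file name 0001.png is just a placeholder):

# Draw the predicted tumor contour on top of the original ultrasound image.
import cv2

image = cv2.imread("./test_data/img/0001.png")                  # placeholder file name
mask = cv2.imread("./test_data/predict/0001.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours((mask > 127).astype("uint8"),
                               cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image, contours, -1, (0, 255, 0), 2)           # green tumor margin
cv2.imwrite("overlay.png", image)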

[Figure: Visualization of tumor margins]

4 Acknowledgement

  • Thanks to the organizers of the 2021 CCF BDCI challenge.
  • Thanks to the 2020 MICCAI TNSCUI TOP 1 solution for making its code public.
  • Thanks to qubvel, the author of segmentation_models.pytorch (smp) and ttach; all of the networks and the TTA used in this code come from his implementations.