Segmentation and Identification of Vertebrae in CT Scans using CNN, k-means Clustering and k-NN

Overview

This repository contains the code for the segmentation and identification of vertebrae in CT scans using a CNN, k-means clustering, and k-NN.

If you use this code for your research, please cite our paper:

@Article{informatics8020040,
AUTHOR = {Altini, Nicola and De Giosa, Giuseppe and Fragasso, Nicola and Coscia, Claudia and Sibilano, Elena and Prencipe, Berardino and Hussain, Sardar Mehboob and Brunetti, Antonio and Buongiorno, Domenico and Guerriero, Andrea and Tatò, Ilaria Sabina and Brunetti, Gioacchino and Triggiani, Vito and Bevilacqua, Vitoantonio},
TITLE = {Segmentation and Identification of Vertebrae in CT Scans Using CNN, k-Means Clustering and k-NN},
JOURNAL = {Informatics},
VOLUME = {8},
YEAR = {2021},
NUMBER = {2},
ARTICLE-NUMBER = {40},
URL = {https://www.mdpi.com/2227-9709/8/2/40},
ISSN = {2227-9709},
DOI = {10.3390/informatics8020040}
}

Graphical Abstract: GraphicalAbstract


Materials

The dataset (from the VerSe challenge) can be downloaded for free at this URL.


Configuration and pre-processing

Configure the file config/paths.py according to the paths on your machine. Note that base_dataset_dir should be an absolute path pointing to the directory that contains the subfolders with the images and labels used for training and validating the algorithms in this repository.
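
For reference, a minimal sketch of what config/paths.py is expected to contain is shown below; only base_dataset_dir is documented in this README, so keep any other variables the shipped file defines and simply point base_dataset_dir at your data:

# config/paths.py -- minimal sketch, not the full shipped file.
# Only base_dataset_dir is documented in this README; the path below is hypothetical.
base_dataset_dir = "/home/user/datasets/verse"  # absolute path containing the images/labels subfolders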

In order to perform pre-processing, execute the following scripts in the given order.

  1. Perform Train / Test split:
python run/task0/split.py --original-training-images=OTI --original-training-labels=OTL \
                          --original-validation-images=OVI --original-validation-labels=OVL

Where:

  • OTI is the path to the CT scans from the original dataset (downloaded from the VerSe challenge, see link above);
  • OTL is the path to the labels of the original dataset;
  • OVI is the path where the test images will be put;
  • OVL is the path where the test labels will be put.
  2. Crop the split datasets:
python run/task0/crop_mask.py --original-training-images=OTI --original-training-labels=OTL \
                              --original-validation-images=OVI --original-validation-labels=OVL

Where the arguments are the same as in step 1.

  3. Pre-process the cropped datasets (see also the pre-processing by Payer et al.):
python run/task0/pre_processing.py
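
For example, assuming the original VerSe images and labels live under /data/verse (hypothetical paths, adapt them to your layout), the three steps could be run as:

python run/task0/split.py --original-training-images=/data/verse/images \
                          --original-training-labels=/data/verse/labels \
                          --original-validation-images=/data/verse/test_images \
                          --original-validation-labels=/data/verse/test_labels

python run/task0/crop_mask.py --original-training-images=/data/verse/images \
                              --original-training-labels=/data/verse/labels \
                              --original-validation-images=/data/verse/test_images \
                              --original-validation-labels=/data/verse/test_labels

python run/task0/pre_processing.py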

Binary Segmentation

This stage relies on a 3D V-Net. The workflow followed for binary segmentation is depicted in the following figure:

BinarySegmentationWorkflowImage

Training

To perform the training, the syntax is as follows:

python run/task1/train.py --epochs=NUM_EPOCHS --batch=BATCH_SIZE --workers=NUM_WORKERS \
                          --lr=LR --val_epochs=VAL_EPOCHS

Where:

  • NUM_EPOCHS is the number of epochs for which to train the CNN (we often used 500 or 1000 in our experiments);
  • BATCH_SIZE is the batch size (we often used 8 in our experiments, in order to benefit from BatchNormalization layers);
  • NUM_WORKERS is the number of workers for data loading (see the PyTorch documentation);
  • LR is the learning rate;
  • VAL_EPOCHS is the number of epochs after which validation is performed during training (a checkpoint model is also saved every VAL_EPOCHS epochs).
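
For instance, a run with the values mentioned above could look like the following (the learning rate, number of workers, and validation interval are illustrative choices, not values prescribed by the paper):

python run/task1/train.py --epochs=500 --batch=8 --workers=4 \
                          --lr=1e-4 --val_epochs=50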

Inference

To perform the inference, the syntax is as follows:

python run/task1/segm_bin.py --path_image_in=PATH_IMAGE_IN --path_mask_out=PATH_MASK_OUT

Where:

  • PATH_IMAGE_IN is the folder with the input images;
  • PATH_MASK_OUT is the folder where the output masks will be written.
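
For example, with hypothetical input and output folder names (adapt them to your layout):

python run/task1/segm_bin.py --path_image_in=imagesTs --path_mask_out=predTs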

An example inference result is depicted in the following figure:

BinarySegmentationInferenceImage

Metrics Calculation

In order to calculate binary segmentation metrics, the syntax is as follows:

python run/task1/metrics.py

Multiclass Segmentation

The followed workflow for multiclass segmentation is depicted in the following figure:

MultiClassSegmentationWorkflow

To perform the multiclass segmentation (it can only be run on the binary segmentation output), the syntax is as follows:

python run/task2/multiclass_segmentation.py --input-path=INPUT_PATH \
                                            --gt-path=GT_PATH \
                                            --output-path=OUTPUT_PATH \
                                            --use-inertia-tensor=INERTIA \
                                            --no-metrics=NOM

Where:

  • INPUT_PATH is the path to the folder containing the binary spine masks obtained in the previous steps (or the binary spine ground truth).
  • GT_PATH is the path to the folder containing the ground truth labels.
  • OUTPUT_PATH is the path where the output multiclass masks will be written.
  • INERTIA can be either 0 or 1, depending on whether you want to include the inertia tensor in the feature set for discriminating between vertebral bodies and arches (useful for scoliosis cases); default is 0.
  • NOM can be either 0 or 1, depending on whether you want to skip the calculation of the multi-Hausdorff distance and multi-ASSD for the vertebrae labelling (it can be very computationally expensive with this implementation); default is 1.
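
For example (all folder names below are hypothetical):

python run/task2/multiclass_segmentation.py --input-path=predTs \
                                            --gt-path=labelsTs \
                                            --output-path=predMulticlass \
                                            --use-inertia-tensor=0 \
                                            --no-metrics=1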

Figures highlighting the different steps involved in this stage follow (a minimal illustrative sketch of the connected-components and clustering idea is given after the list):

  • Morphology (figure: MultiClassSegmentationMorphology)

  • Connected components (figure: MultiClassSegmentationConnectedComponents)

  • Clustering and arch/body coupling (figure: MultiClassSegmentationClustering)

  • Centroids computation (figure: MultiClassSegmentationCentroids)

  • Mesh reconstruction (figure: MultiClassSegmentationMesh)
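
To make the connected-components and clustering steps above more concrete, here is a minimal, illustrative sketch; it is not the repository's implementation, and the package choices (scikit-image, scikit-learn) and parameters are assumptions:

# Illustrative sketch only -- NOT the repository's implementation.
# Assumes scikit-image and scikit-learn are available.
import numpy as np
from skimage import measure, morphology
from sklearn.cluster import KMeans

def label_vertebrae(binary_mask: np.ndarray, n_vertebrae: int) -> np.ndarray:
    """Assign a tentative cluster label to each voxel of a binary spine mask."""
    # Morphological opening to detach loosely connected structures.
    opened = morphology.binary_opening(binary_mask, morphology.ball(2))
    # Connected-component analysis on the cleaned mask.
    components = measure.label(opened)
    props = measure.regionprops(components)
    # Cluster the component centroids into the expected number of vertebrae.
    centroids = np.array([p.centroid for p in props])
    km = KMeans(n_clusters=n_vertebrae, n_init=10, random_state=0).fit(centroids)
    # Propagate each component's cluster label back to its voxels.
    labelled = np.zeros_like(components)
    for prop, cluster in zip(props, km.labels_):
        labelled[components == prop.label] = cluster + 1
    return labelled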


Visualization of the Predictions

The base_dataset_dir folder also contains the output folders:

  • predTr contains the binary segmentation predictions made on the training set;
  • predTs contains the binary segmentation predictions made on the test set;
  • predMulticlass contains the multiclass segmentation predictions and the JSON files with the centroids' positions.
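
As a quick sanity check of the multiclass outputs, the centroid JSON files can be inspected with a few lines of Python; the file name below is hypothetical and the schema is not documented here, so verify both against the actual contents of predMulticlass:

import json
from pathlib import Path

# Hypothetical file name -- list the predMulticlass folder for the real ones.
centroids_file = Path("predMulticlass") / "verse_case_centroids.json"

with open(centroids_file) as f:
    centroids = json.load(f)

# Dump whatever structure the file actually has.
print(json.dumps(centroids, indent=2))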
