The Body Part Regression (BPR) model translates the anatomy in a radiologic volume into a machine-interpretable form.

Overview

Copyright © German Cancer Research Center (DKFZ), Division of Medical Image Computing (MIC). Please make sure that your usage of this code is in compliance with the code license: License


Body Part Regression

The Body Part Regression (BPR) model translates the anatomy in a radiologic volume into a machine-interpretable form. Each axial slice maps to a slice score. The slice scores increase monotonically with patient height. The following figure shows example slices for the predicted slice scores 0, 25, 50, 75, and 100. Each row contains independent, randomly selected CT slices with nearly the same predicted score. It can be seen that the start of the pelvis maps to 0, the upper pelvis region to 25, the start of the lungs to 50, the shoulder region to 75, and the head to 100:

Figure: example CT slices for the predicted slice scores 0, 25, 50, 75, and 100

With the help of a slice-score look-up table, the mapping between certain landmarks and slice scores can be checked. The BPR model learns in a completely self-supervised fashion; annotated data is only needed for evaluation purposes, not for training.

The BPR model can be used for sorting and labeling radiologic images by body parts. Moreover, it is useful for cropping specific body parts as a pre-processing or post-processing step of medical algorithms. If a body part is invalid for a certain medical algorithm, it can be cropped out before applying the algorithm to the volume.

The Body Part Regression model in this repository is based on the SSBR model from Yan et al. with a few modifications explained in the master thesis "Body Part Regression for CT Volumes".

For CT volumes, a pretrained model for inference exists already. With a simple command from the terminal, the body part information can be calculated for nifti-files.


1. Install package

You can either use conda or just pip to install the bpreg package.

1.1 Install package without conda

  1. Create a new python environment and activate it through:
     python -m venv venv_name
     source venv_name/bin/activate
  2. Install the package through:
     pip install bpreg

1.2 Install package with conda

  1. Create a new conda environment and activate it with:
     conda create -n venv_name
     conda activate venv_name
  2. Install pip into the environment:
     conda install pip
  3. Install the package with pip through the command (with your personal anaconda path):
     /home/anaconda3/envs/venv_name/bin/pip install bpreg

You can find your personal anaconda path through the command:

which anaconda

Analyze examined body parts

The scope of the pretrained BPR model for CT volumes is adult body parts from the beginning of the pelvis to the end of the head. Note that due to missing training data, children, pregnant women, and legs are not in the scope of the algorithm. To obtain the body part information for nifti-files, place the files with the file ending *.nii or *.nii.gz in one directory and run the following command:

bpreg_predict -i <input_path> -o <output_path>

Tags for the bpreg_predict command:

  • -i (str): input path, origin of nifti-files
  • -o (str): save path for created meta-data json-files
  • --skip (bool): skip already created .json metadata files (default: 1)
  • --model (str): specify model (default: public model from zenodo for CT volumes)
  • --plot (png): create and save plot for each volume with calculated slice score curve.

The bpreg_predict command creates a corresponding json-file for each nifti-file in the directory input_path and saves it in the output_path. Moreover, a README file is saved in the output path, where the information inside the JSON files is explained.
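
A minimal sketch of how such a meta-data file can be read, assuming a hypothetical file name and using the key names documented in the section "Structure of metadata file" below (the authoritative key names are listed in the README that bpreg_predict writes next to the JSON files):

    import json
    from pathlib import Path

    # Hypothetical output directory and file name; adjust to your own bpreg_predict output.
    meta_path = Path("output_path") / "example_volume.json"

    with open(meta_path) as f:
        meta = json.load(f)

    # Key names as documented in "Structure of metadata file" below.
    print(meta["body part examined tag"])     # e.g. "CHEST"
    print(meta["valid z-spacing"])            # 1 if the z-spacing seems plausible
    print(len(meta["cleaned slice-scores"]))  # one score per axial slice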

If your input data is not in the nifti-format, you can still apply the BPR model by converting the data to a numpy matrix. A tutorial for using the package for CT images in the numpy format can be found in the notebook: docs/notebooks/inference-example-with-npy-arrays.
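
For example, a DICOM series could be converted to a numpy volume with SimpleITK before following that notebook (a sketch only; the directory path is a placeholder and the actual inference call is shown in the notebook):

    import SimpleITK as sitk

    # Placeholder path to a directory containing a single DICOM series.
    dicom_dir = "path/to/dicom_series"

    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    image = reader.Execute()

    # Convert to a numpy array with axis order (z, y, x); the physical
    # z-spacing in mm can be kept alongside it if needed.
    volume = sitk.GetArrayFromImage(image)
    z_spacing = image.GetSpacing()[2]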

If you use this model for your work, please make sure to cite the model and the training data as explained at zenodo.

The meta-data files can be used for three main use cases.

  1. Predicting the examined body part
  2. Filter corrupted CT images
  3. Cropping required region from CT images

1. Predicting the examined body part

The label for the predicted examined body part can be found under the body part examined tag in the meta-data file. The following figure compares the BodyPartExamined tag from the DICOM meta-data header with the predicted body part examined tag from this method. The predicted tag is more fine-grained and contains fewer misleading and missing values than the BodyPartExamined tag from the DICOM header:

Pie charts comparing the DICOM BodyPartExamined tag with the predicted body part examined tag

2. Filter corrupted CT images

Some of the predicted body part examined tags are NONE, which means that the predicted slice score curve for this CT volume looks unexpected (in this case, the valid z-spacing tag in the meta-data is equal to 0). Based on the NONE tag, corrupted CT volumes can be found automatically. In the following figure, the left column shows a typical CT volume with its corresponding typical slice score curve. Next to it, several corrupted CT volumes are shown with their slice score curves. It can be seen that the slice score curves of the corrupted CT volumes clearly differ from the expected curve. If the slice score curve is monotonically increasing, as in the left figure, but the predicted body part examined tag is still NONE, the z-spacing of the CT volume is likely to be wrong.

Example figures of slice score curves from corrupted CT images
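
A possible way to collect such corrupted volumes from the generated meta-data files is sketched below (the output directory is a placeholder; key names follow the meta-data structure documented below):

    import json
    from pathlib import Path

    # Placeholder: directory containing the JSON files written by bpreg_predict.
    output_dir = Path("output_path")

    corrupted = []
    for meta_file in sorted(output_dir.glob("*.json")):
        with open(meta_file) as f:
            meta = json.load(f)
        # Flag volumes whose slice score curve looks unexpected or whose
        # z-spacing sanity check failed.
        if meta.get("body part examined tag") == "NONE" or meta.get("valid z-spacing") == 0:
            corrupted.append(meta_file.name)

    print(f"{len(corrupted)} potentially corrupted volumes:", corrupted)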

3. Cropping required region from CT images

The meta-data can also be used to crop appropriate regions from a CT volume. This can be helpful for medical computer vision algorithms. It can be implemented as a pre-processing or post-processing step and leads to fewer false-positive predictions in regions which the model has not seen during training:

Figure of the known-region cropping process as a pre-processing or post-processing step for a lung segmentation method
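
Below is a sketch of how the slice indices from the body part examined dictionary (documented further down) could be used to crop, for example, the chest region; the file names are placeholders, and it is assumed that the axial dimension is the third axis of the NIfTI array, which should be verified for your data orientation:

    import json
    import nibabel as nib

    # Placeholder paths for one volume and its bpreg_predict meta-data file.
    img = nib.load("input_path/example_volume.nii.gz")
    with open("output_path/example_volume.json") as f:
        meta = json.load(f)

    # Slice indices where the chest is visible, taken from the
    # "body part examined" dictionary documented below.
    chest_slices = meta["body part examined"]["chest"]
    start, stop = min(chest_slices), max(chest_slices) + 1

    data = img.get_fdata()
    # Assumes the axial dimension is the third axis of the array.
    cropped = data[:, :, start:stop]

    # The affine is reused unchanged here; for exact world coordinates the
    # translation along the z-axis would need to be adjusted.
    nib.save(nib.Nifti1Image(cropped, img.affine), "example_volume_chest.nii.gz")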


Structure of metadata file

The json-file contains all the metadata regarding the examined body part of the nifti-file. It includes the following tags:

  • cleaned slice-scores: Cleanup of the outcome from the BPR model (smoothing, filtering out outliers).
  • unprocessed slice-scores: Plain outcome of the BPR model.
  • body part examined: Dictionary with the tags "legs", "pelvis", "abdomen", "chest", "shoulder-neck" and "head". For each body part, the slice indices where that body part is visible are listed.
  • body part examined tag: updated tag for BodyPartExamined. Possible values: PELVIS, ABDOMEN, CHEST, NECK, HEAD, HEAD-NECK-CHEST-ABDOMEN-PELVIS, HEAD-NECK-CHEST-ABDOMEN, ...
  • look-up table: reference table for mapping slice scores to landmarks and vice versa.
  • reverse z-ordering: (0/1) equal to one if patient height decreases with slice index.
  • valid z-spacing: (0/1) equal to one if z-spacing seems to be plausible. The data sanity check is based on the slope of the curve from the cleaned slice-scores.

The information from the meta-data file can be traced back to the unprocessed slice-scores and the look-up table.
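
As an illustration of how a look-up table could be used to map a slice score back to the nearest landmark, here is a sketch with a simplified, hypothetical table (the actual structure of the look-up table entry is described in the README written next to the JSON files):

    # Hypothetical, simplified look-up table: landmark name -> reference slice score.
    lookup_table = {
        "pelvis_start": 0.0,
        "lung_start": 50.0,
        "shoulder": 75.0,
        "head_end": 100.0,
    }

    def nearest_landmark(slice_score: float) -> str:
        """Return the landmark whose reference score is closest to the given slice score."""
        return min(lookup_table, key=lambda name: abs(lookup_table[name] - slice_score))

    print(nearest_landmark(48.3))  # -> "lung_start"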


Documentation for Body Part Regression

In the docs/notebooks folder, you can find a tutorial on how to use the body part regression model for inference. An example is presented where the lungs are detected and cropped automatically from CT volumes. Moreover, a tutorial for training and evaluating a Body Part Regression model can be found there.

For a more detailed explanation of the theory behind Body Part Regression and the application use cases, have a look at the master thesis "Body Part Regression for CT Images" by Sarah Schuhegger.


Cite Software

Sarah Schuhegger. (2021). MIC-DKFZ/BodyPartRegression: (v1.0). Zenodo. https://doi.org/10.5281/zenodo.5195341

Owner
MIC-DKFZ, Division of Medical Image Computing, German Cancer Research Center (DKFZ)