
Overview

Cascaded-FCN

This repository contains the pre-trained models for a Cascaded-FCN in Caffe and TensorFlow that segments the liver and its lesions out of axial CT images, as well as a Python wrapper for dense 3D Conditional Random Fields (3D CRFs).

This work was published in the MICCAI 2016 paper (arXiv link) titled:

Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional 
Neural Networks and 3D Conditional Random Fields

Caffe

Quick Start

If you want to use our code, we offer a Docker image that runs our code and has all dependencies installed, including the correct Caffe version. After installing Docker and nvidia-docker:

sudo GPU=0 nvidia-docker run -v $(pwd):/data -P --net=host --workdir=/Cascaded-FCN -ti --privileged patrickchrist/cascadedfcn bash

Then start Jupyter Notebook and browse to localhost:8888

jupyter notebook

Tensorflow

Please see the README and documentation at https://github.com/FelixGruen/tensorflow-u-net

Citation

If you have used these models in your research, please use the following BibTeX entries for citation:

@Inbook{Christ2016,
  title="Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields",
  author="Christ, Patrick Ferdinand and Elshaer, Mohamed Ezzeldin A. and Ettlinger, Florian and Tatavarty, Sunil and Bickel, Marc and Bilic, Patrick and Rempfler, Markus and Armbruster, Marco and Hofmann, Felix and D'Anastasi, Melvin and Sommer, Wieland H. and Ahmadi, Seyed-Ahmad and Menze, Bjoern H.",
  editor="Ourselin, Sebastien and Joskowicz, Leo and Sabuncu, Mert R. and Unal, Gozde and Wells, William",
  bookTitle="Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2016: 19th International Conference, Athens, Greece, October 17-21, 2016, Proceedings, Part II",
  year="2016",
  publisher="Springer International Publishing",
  address="Cham",
  pages="415--423",
  isbn="978-3-319-46723-8",
  doi="10.1007/978-3-319-46723-8_48",
  url="http://dx.doi.org/10.1007/978-3-319-46723-8_48"
}
@article{2017arXiv170205970C,
  author = {{Christ}, P.~F. and {Ettlinger}, F. and {Gr{\"u}n}, F. and {Elshaera}, M.~E.~A. and {Lipkova}, J. and {Schlecht}, S. and {Ahmaddy}, F. and {Tatavarty}, S. and {Bickel}, M. and {Bilic}, P. and {Rempfler}, M. and {Hofmann}, F. and {Anastasi}, M.~D. and {Ahmadi}, S.-A. and {Kaissis}, G. and {Holch}, J. and {Sommer}, W. and {Braren}, R. and {Heinemann}, V. and {Menze}, B.},
  title = "{Automatic Liver and Tumor Segmentation of CT and MRI Volumes using Cascaded Fully Convolutional Neural Networks}",
  journal = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint = {1702.05970},
  primaryClass = "cs.CV",
  keywords = {Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence},
  year = 2017,
}
@inproceedings{Christ2017SurvivalNetPP,
  title={SurvivalNet: Predicting patient survival from diffusion weighted magnetic resonance images using cascaded fully convolutional and 3D convolutional neural networks},
  author={Patrick Ferdinand Christ and Florian Ettlinger and Georgios Kaissis and Sebastian Schlecht and Freba Ahmaddy and Felix Gr{\"u}n and Alexander Valentinitsch and Seyed-Ahmad Ahmadi and Rickmer Braren and Bjoern H. Menze},
  booktitle={ISBI},
  year={2017}
}

Description

This work uses two cascaded U-Nets:

  1. In step 1, a U-Net segments the liver from an axial abdominal CT slice. The segmentation output is a binary mask with bright pixels denoting the segmented object. By segmenting all slices in a volume we obtain a 3D segmentation.
  2. (Optional) We refine the liver segmentation using a dense 3D CRF (conditional random field). The refined liver segmentation is then used in step 2.
  3. In step 2, another U-Net takes an enlarged liver slice and segments its lesions (see the sketch after this list).
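
A minimal sketch of this two-step inference is given below. It is not the repository's actual code; liver_unet, lesion_unet and crop_to_liver are hypothetical stand-ins for the trained step-1 and step-2 networks and the liver ROI extraction:

import numpy as np

def segment_volume(ct_volume, liver_unet, lesion_unet, crop_to_liver, threshold=0.5):
    """Run the cascade slice by slice over an axial CT volume of shape (Z, H, W)."""
    liver_masks, lesion_rois = [], []
    for ct_slice in ct_volume:
        liver_prob = liver_unet(ct_slice)           # step 1: liver probability map
        liver_mask = liver_prob > threshold         # binary mask, bright pixels = liver
        liver_masks.append(liver_mask)
        roi = crop_to_liver(ct_slice, liver_mask)   # enlarged liver region for step 2
        lesion_rois.append(lesion_unet(roi) > threshold)  # step 2: lesion mask inside the ROI
    # Stacking the per-slice liver masks yields the 3D liver segmentation.
    # (The optional 3D CRF refinement between the two steps is omitted in this sketch.)
    return np.stack(liver_masks), lesion_rois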

The input to both networks is 572x572 pixels, generated by mirroring a 388x388 slice at all four sides. The 92 boundary pixels on each side are obtained by reflection, resulting in (92+388+92)x(92+388+92) = 572x572.
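
For illustration, this mirroring can be reproduced with NumPy's reflection padding (a sketch of the idea, not the repository's preprocessing code):

import numpy as np

axial_slice = np.zeros((388, 388), dtype=np.float32)   # placeholder 388x388 axial slice

# Reflect 92 pixels on every side: (92 + 388 + 92) x (92 + 388 + 92) = 572 x 572
padded = np.pad(axial_slice, pad_width=92, mode="reflect")
assert padded.shape == (572, 572)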

An illustration of the pipeline is shown below:

Illustration of the CascadedFCN pipeline

For detailed information, have a look at our presentation.

3D Conditional Random Field (3D CRF)

You can find the 3D CRF at 3DCRF-python. Please follow the installation instructions in its README.

License

These models are published for unrestricted use in research and education. For commercial use, please contact the paper authors.
