Deep Multi-Magnification Network for multi-class tissue segmentation of whole slide images

Overview

Deep Multi-Magnification Network

This repository provides training and inference code for Deep Multi-Magnification Network, published here. Deep Multi-Magnification Network automatically segments multiple tissue subtypes in histopathology whole slide images using a set of patches from multiple magnifications.

Prerequisites

  • Python 3.6.7
  • PyTorch 1.3.1
  • OpenSlide 1.1.1
  • Albumentations

Training

The main training code is training.py. The trained segmentation model will be saved under runs/ by default.

In addition to config, you may need to update the following variables before running training.py:

  • n_classes: the number of tissue subtype classes + 1
  • train_file and val_file: the list of training and validation patches
    • Slide patches must be stored as /path/slide_tiles/patch_1.jpg, /path/slide_tiles/patch_2.jpg, ... /path/slide_tiles/patch_N.jpg
    • The corresponding label patches must be stored as /path/label_tiles/patch_1.png, /path/label_tiles/patch_2.png, ... /path/label_tiles/patch_N.png
    • train_file and val_file must be formatted as
     /path/,patch_1
     /path/,patch_2
     ...
     /path/,patch_N
    
  • d: the number of pixels of each class in the training set for weighted cross entropy loss function

Note that pixels labeled as class 0 are unannotated and will not contribute to the training.
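As a concrete illustration of the last two points, the per-class pixel counts in d can be turned into class weights for a weighted cross-entropy loss, with class 0 excluded so unannotated pixels do not contribute. This is a minimal sketch, not the exact code in training.py; the variable names and the inverse-frequency weighting scheme are assumptions for illustration.

import torch
import torch.nn as nn

# Hypothetical per-class pixel counts from the training set (index 0 = unannotated).
pixel_counts = torch.tensor([0.0, 1.2e8, 3.4e7, 5.6e7, 7.8e6])  # placeholder values

# One common scheme: inverse frequency, normalized over the annotated classes only.
annotated = pixel_counts[1:]
weights = torch.zeros_like(pixel_counts)
weights[1:] = annotated.sum() / (annotated * len(annotated))

# Class 0 is unannotated, so it is excluded from the loss via ignore_index.
criterion = nn.CrossEntropyLoss(weight=weights, ignore_index=0)

# logits: (N, n_classes, H, W), labels: (N, H, W) with values in [0, n_classes - 1]
# loss = criterion(logits, labels)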

Inference

The main inference codes are slidereader_coords.py and inference.py. You first need to run slidereader_coords.py to generate patch coordinates to be segmented in input whole slide images. After generating patch coordinates, you may run inference.py to generate segmentation predictions of input whole slide images. The segmentation predictions will be saved under imgs/ by default.

You may need to update the following variables before running slidereader_coords.py:

  • slides_to_read: the list of whole slide images
  • coord_file: an output file listing all patch coordinates
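For reference, the sketch below shows one way such a coordinate file could be produced with OpenSlide by tiling each slide on a regular grid. The patch size, grid step, and the slide_path,x,y line format are assumptions made for illustration; consult slidereader_coords.py for the format it actually writes.

import openslide

# Assumed inputs; adjust to match slidereader_coords.py.
slides_to_read = ["/path/slide_1.svs", "/path/slide_2.svs"]  # placeholder paths
coord_file = "coords.txt"
patch_size = 256  # hypothetical patch size at the base magnification

with open(coord_file, "w") as f:
    for slide_path in slides_to_read:
        slide = openslide.OpenSlide(slide_path)
        width, height = slide.dimensions  # level-0 size in pixels
        # Walk the slide on a non-overlapping grid and record each patch origin.
        for y in range(0, height - patch_size + 1, patch_size):
            for x in range(0, width - patch_size + 1, patch_size):
                f.write(f"{slide_path},{x},{y}\n")
        slide.close()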

In addition to model_path and out_path, you may need to update the following variables before running inference.py:

  • n_classes: the number of tissue subtype classes + 1
  • test_file: the list of patch coordinates generated by slidereader_coords.py
  • data_path: the path where whole slide images are located
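As context for how inference consumes these coordinates, the sketch below reads a single patch from a whole slide image under data_path with OpenSlide at a given location. The file name, coordinates, and patch size are placeholders; inference.py additionally assembles patches from multiple magnifications before running the network.

import numpy as np
import openslide

data_path = "/path/to/slides/"                          # placeholder
slide = openslide.OpenSlide(data_path + "slide_1.svs")  # hypothetical file name

x, y = 10240, 20480   # placeholder top-left coordinates from the coordinate file
patch_size = 256      # hypothetical patch size

# read_region takes level-0 coordinates, a pyramid level, and a (width, height) size,
# and returns an RGBA PIL image; drop the alpha channel to get an RGB array.
patch = slide.read_region((x, y), 0, (patch_size, patch_size))
patch = np.array(patch)[:, :, :3]

slide.close()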

Please download the pretrained breast model here.

Note that segmentation predictions will be generated in 4-bit BMP format. The size limit for 4-bit BMP files is 2^32 pixels.
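If you want to post-process the predictions, note that a 4-bit BMP is an indexed (paletted) image, so the per-pixel class index can be read back directly. The sketch below uses Pillow and NumPy, assuming the pixel values correspond to the class indices used during training; the file name is a placeholder.

import numpy as np
from PIL import Image

# Load a segmentation prediction saved by inference.py (path is a placeholder).
prediction = Image.open("imgs/slide_1_prediction.bmp")

# For a paletted (mode "P") image, np.array returns the palette indices,
# i.e. one class label per pixel.
labels = np.array(prediction)
print(labels.shape, np.unique(labels))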

License

This project is under the CC-BY-NC 4.0 license. See LICENSE for details. (c) MSK

Acknowledgments

Reference

If you find our work useful, please cite our paper:

@article{ho2021,
  title={Deep Multi-Magnification Networks for multi-class breast cancer image segmentation},
  author={Ho, David Joon and Yarlagadda, Dig V.K. and D'Alfonso, Timothy M. and Hanna, Matthew G. and Grabenstetter, Anne and Ntiamoah, Peter and Brogi, Edi and Tan, Lee K. and Fuchs, Thomas J.},
  journal={Computerized Medical Imaging and Graphics},
  year={2021},
  volume={88},
  pages={101866}
}