Code-free deep segmentation for computational pathology

Overview

NoCodeSeg: Deep segmentation made easy!

This is the official repository for the manuscript "Code-free development and deployment of deep segmentation models for digital pathology", submitted to Frontiers in Medicine.

The repository contains trained deep models for epithelium segmentation of H&E-stained and CD3-immunostained WSIs, as well as source code for importing and exporting annotations and predictions between QuPath, DeepMIB, and FastPathology. See the Data section below for how to download the 251 annotated WSIs.

Getting started

Watch the video

A video tutorial of the proposed pipeline was published on YouTube. It demonstrates the steps for:

  • Downloading and installing the software
  • QuPath
    • Create a project, then export annotations as patches with label files
    • Export patches from unannotated images for prediction in DeepMIB
    • (later) Import predictions from MIB and FastPathology as annotations
  • MIB
    • Use the annotated patches/labels exported from QuPath
    • Configure and train deep segmentation models (e.g. U-Net/SegNet)
    • Use the trained U-Net to predict on the unannotated patches exported from QuPath
    • Export trained models to the ONNX format for use in FastPathology
  • FastPathology
    • Import the DeepMIB-exported ONNX model and create a configuration file for it
    • Create a project and load WSIs into it
    • Use the U-Net ONNX model to render predictions on top of the WSI in real time
    • Export full-sized WSI TIFFs for import into QuPath

Data

The 251 annotated WSIs are being processed before publication on DataverseNO, where the dataset will be made openly available to anyone.

Reading annotations

The annotations are stored as tiled, pyramidal TIFFs, which makes it easy to generate patches from the data without any preprocessing. Reading these files and using them to generate training data is demonstrated in the tutorial video above.

TL;DR: Load the TIFF as annotations in QuPath using the provided Groovy script, then export these as labelled tiles (a sketch of the export step is shown below).
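
For reference, here is a minimal sketch of the export step, assuming the annotation TIFF has already been imported as annotations in QuPath. It follows the generic LabeledImageServer/TileExporter example from the QuPath documentation rather than the repository's exact script, and the output directory, class name, tile size and downsample are placeholders to adapt to your own project:

import qupath.lib.common.ColorTools
import qupath.lib.images.servers.LabeledImageServer
import qupath.lib.images.writers.TileExporter

def imageData = getCurrentImageData()

// Placeholder output directory inside the QuPath project
def outputDir = buildFilePath(PROJECT_BASE_DIR, 'tiles')
mkdirs(outputDir)

// Export resolution; should match the resolution you want to train at
double downsample = 1.0

// Build a label server from the imported annotations (a single class is assumed here)
def labelServer = new LabeledImageServer.Builder(imageData)
    .backgroundLabel(0, ColorTools.WHITE)  // Background label (usually 0 or 255)
    .downsample(downsample)
    .addLabel('Epithelium', 1)             // Name must match the annotation class in QuPath
    .multichannelOutput(false)
    .build()

// Export image tiles together with the corresponding label tiles
new TileExporter(imageData)
    .downsample(downsample)
    .imageExtension('.png')      // File format for the image tiles
    .tileSize(512)               // Tile size in pixels
    .labeledServer(labelServer)  // Expects the built ImageServer, not the Builder
    .annotatedTilesOnly(true)    // Skip tiles without annotations
    .writeTiles(outputDir)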

Reading annotations in Python

However, if you wish to use Python, the annotations can be read exactly the same way as regular WSIs (for instance using OpenSlide):

import openslide

reader = openslide.OpenSlide("path-to-annotation-image.tiff")  # open like a regular WSI
patch = reader.read_region(location=(x, y), level=level, size=(w, h))
reader.close()

The pixels here are aligned one-to-one with the original WSI. To generate patches for training, it is also possible to use pyFAST, which handles the patching for you; see the pyFAST documentation for an example.

Citation

Please consider citing our paper if you find the work useful:

  @misc{pettersen2021codefree,
    title={Code-free development and deployment of deep segmentation models for digital pathology},
    author={Henrik Sahlin Pettersen and Ilya Belevich and Elin Synnøve Røyset and Erik Smistad and Eija Jokitalo and Ingerid Reinertsen and Ingunn Bakke and André Pedersen},
    year={2021},
    eprint={2111.08430},
    archivePrefix={arXiv},
    primaryClass={q-bio.QM}
  }
Comments
  • Create model multiple classes

    Hi @andreped

    Thanks for this great software and workflow. I would like to generate a model which classifies the epithelia in the colon from WSIs into villi (usually at the boundary) and crypts (circular structures). Essentially, there would be 3 classes:

    • background
    • villi and
    • crypts

    My plan is to use your models to generate annotations from my WSIs, import into QuPath, correct them, and then split annotations into different classes.

    How do I configure image export for 3 different classes from QuPath, train in MIB and then run on FastPathology?

    Is it possible to do this and is this a good workflow idea?

    Cheers Pradeep

    enhancement good first issue 
    opened by pr4deepr 19
  • Add instructions on how to run WSI-level predictions

    As our TileImporter script does not support multi-class predictions, the alternative is to run predictions at the WSI level.

    This requires running predictions slightly differently from what was shown in the tutorial video.

    However, there does not seem to be any documentation for this. It should be added to assist users.

    enhancement 
    opened by andreped 10
  • Import tiles script QuPath

    Hi @andreped. When I try the import pyramidal TIFF script to improve annotations generated by FastPathology, the annotations are 4 times smaller than the WSI. I can't enter a value to control the downsample in the new script anymore. It looks like the script has been updated compared to the one in the video, so I've reused the old version of the script to import annotations for now.

    Cheers Pradeep

    bug enhancement 
    opened by pr4deepr 7
  • Export Tiles from QuPath with multiple classes

    First some context: I am attempting to obtain multi-class segmentation of cerebellum tissue with different cellular layers corresponding to separate classes. I have annotated four classes: 'EGL', 'Molecular layer', 'IGL', and 'WM' in QuPath. I tested exporting tiles with multiple classes from QuPath using the two different scripts available. Here are the issues I am having with both:

    1. The script by @andreped is giving me an error:

    ERROR: MissingMethodException at line 61: No signature of method: qupath.lib.images.writers.TileExporter.labeledServer() is applicable for argument types: (qupath.lib.images.servers.LabeledImageServer$Builder) values: [[email protected]] Possible solutions: labeledServer(qupath.lib.images.servers.ImageServer)

    ERROR: org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:70) org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:46) org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47) org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:139) Script16.run(Script16.groovy:62) org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:317) org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:155) qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:982) qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:914) qupath.lib.gui.scripting.DefaultScriptEditor.executeScript(DefaultScriptEditor.java:829) qupath.lib.gui.scripting.DefaultScriptEditor$2.run(DefaultScriptEditor.java:1345) java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) java.base/java.util.concurrent.FutureTask.run(Unknown Source) java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) java.base/java.lang.Thread.run(Unknown Source)

    2. The script by @pr4deepr is yielding what I need (see below); however, many of the tiles are just background. Is there a way to incorporate the glassthreshold from the script by @andreped to avoid tiles with too much background? Additionally, in order to get this script to work, I had to edit this line to .multichannelOutput(false).

    20211026_Javid_01Image_01 vsi - 20x  d=5 79044,x=10376,y=36318,w=2965,h=2964

    bug 
    opened by aaronsathya 6
  • generic multi-class support?

    The current importTiles script does not seem to support importing patches with multiple labels. Adding multi-class support to it is quite annoying, but it is something we could try to do early next week. Let us schedule a session.

    Multi-class variants have been implemented for the other scripts (i.e., exportTiles and importPyramidalTIFF), but these have not been tested in various scenarios. Currently, they are split into separate scripts, one for single-class and one for multi-class, but I think the multi-class scripts might actually handle the single-class case as well. Could you run some checks in your pipeline to see whether the multi-class scripts are sufficient?

    enhancement 
    opened by andreped 4
  • Result from FastPathology fails to import in QuPath

    This was observed and discussed in another Issue in the FastPathology repo: https://github.com/AICAN-Research/FAST-Pathology/issues/45

    Opening an issue here to track it.

    This is likely due to the change in how predictions are stored on disk in the new FastPathology version, v1.0.1.

    bug 
    opened by andreped 2
  • Download count is not updating

    Currently, the download count in the download shield is hard-coded.

    That is because there was no easy way to fetch the current count through Markdown.

    I attempted to create JS and Python solutions for this, which were to be run periodically through GitHub Actions, but that would result in lots of redundant commits that are not relevant to the user.

    I'll keep it hard-coded until I have time to come up with a better solution.

    enhancement help wanted 
    opened by andreped 1
  • multiclass_export_tiles

    The main change is that multichannelOutput is set to true and a label is added for each class.

    // Create an ImageServer where the pixels are derived from annotations
    // (added an option for multi-class export)
    def labelServer = new LabeledImageServer.Builder(imageData)
        .backgroundLabel(0, ColorTools.WHITE) // Specify background label (usually 0 or 255)
        .downsample(downsample)    // Choose server resolution; this should match the resolution at which tiles are exported
        .addLabel('Epithelia', 1)  // Choose output labels (one integer value per class; the order matters)
        .addLabel('Crypt', 2)
        .multichannelOutput(true)  // If true, each label is a different channel (required for multiclass probability)
        .build()

    Ideally, a loop running through each label would be nice, so we don't have to define them manually.
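
    A rough, untested sketch of what such a loop could look like is shown below; the class names here are placeholders, and it assumes the builder is fluent (addLabel returns the builder), as in the snippet above:

    // Hypothetical list of class names; pixel values are assigned as 1, 2, ... in this order
    def classNames = ['Epithelia', 'Crypt']

    // Start the builder as before
    def builder = new LabeledImageServer.Builder(imageData)
        .backgroundLabel(0, ColorTools.WHITE)
        .downsample(downsample)

    // Add one label per class instead of hard-coding each addLabel call
    classNames.eachWithIndex { name, i ->
        builder = builder.addLabel(name, i + 1)
    }

    def labelServer = builder
        .multichannelOutput(true)  // One channel per label, as in the multi-class export above
        .build()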

    opened by pr4deepr 0
  • Multi-class tile import?

    The current tileImport script does not support multi-class tiles, that is, prediction tiles generated from MIB by a trained multi-class model.

    It is possible to import predictions from MIB into QuPath by predicting on the full WSI in MIB, which produces a stitched prediction image that can be imported in QuPath using the importStitchedTIFfromMIB script, similarly to what is done for FastPathology. A tile importer is therefore not critical for using the full pipeline.

    However, there are scenarios where having a tile importer is beneficial compared to the alternative, for instance if one only wishes to run prediction on a smaller region of the WSI (based on segmentations in QuPath), which can be especially helpful if the image is extremely large. This can make inference a lot faster.

    It is possible to do such a workflow in FastPathology, given that you provide the region of interest to run inference on. However, no such method has been made easily accessible. It would also require annotations, or a model that can produce these ROI segmentations, to use as a preprocessing step for the subsequent patch-wise model.

    Therefore, having multi-class support for the tile importer could be a valuable feature.

    enhancement 
    opened by andreped 1
Releases
  • download-badge(Jun 21, 2022)

  • v1.0.0(Jun 13, 2022)

    Released code relevant for the Frontiers article.

    The code is frozen in this release to keep supporting the old versions of FastPathology and QuPath used in the tutorial video.

    Future source code will be made compatible with more recent versions of both applications.

    Full Changelog: https://github.com/andreped/NoCodeSeg/commits/v1.0.0

    Source code(tar.gz)
    Source code(zip)
Owner
André Pedersen
PhD Candidate in Medical Technology at NTNU | Master of Science at SINTEF Health Research