MAg: a simple learning-based patient-level aggregation method for detecting microsatellite instability from whole-slide images

Paper

This repository contains the code and some potentially useful data for the paper MAg: a simple learning-based patient-level aggregation method for detecting microsatellite instability from whole-slide images.

Our paper has been accepted at the IEEE International Symposium on Biomedical Imaging (ISBI) 2022. The arXiv preprint is available here: https://arxiv.org/abs/2201.04769.

Abstract


The prediction of microsatellite instability (MSI) and microsatellite stability (MSS) is essential in predicting both the treatment response and prognosis of gastrointestinal cancer. In clinical practice, universal MSI testing is recommended, but the accessibility of such tests is limited. Thus, a more cost-efficient and broadly accessible tool is desired to cover the traditionally untested patients. In the past few years, deep-learning-based algorithms have been proposed to predict MSI directly from haematoxylin and eosin (H&E)-stained whole-slide images (WSIs). Such algorithms can be summarized as (1) patch-level MSI/MSS prediction, and (2) patient-level aggregation. Compared with the advanced deep learning approaches that have been employed for the first stage, only naïve first-order statistics (e.g., averaging and counting) have been employed in the second stage. In this paper, we propose a simple yet broadly generalizable patient-level MSI aggregation (MAg) method to effectively integrate the precious patch-level information. Briefly, the entire probabilistic distribution from the first stage is modeled as histogram-based features, which are fused into the final outcome with machine learning (e.g., an SVM). The proposed MAg method can be easily used in a plug-and-play manner and has been evaluated upon five broadly used deep neural networks: ResNet, MobileNetV2, EfficientNet, Dpn, and ResNext. The results show that the proposed MAg method consistently improves the accuracy of patient-level aggregation on two publicly available datasets. It is our hope that the proposed method could help make low-cost H&E-based MSI detection practical. The comparison of our method with the two commonly used methods (counting and averaging) is shown below:


The proposed method is shown in the figure below:

File structure

Here is the structure of the MAg repository:

(Figure: MAg repository file structure)

Dataset preparation

1. The whole patch-level datasets can be downloaded from https://zenodo.org/record/2530835#.YXIlO5pBw2z. Each patch in this folder belongs to a patient, and the patch's file name tells you which patient it belongs to; for example, the patch blk-AAAFIYHTSVIE-TCGA-G4-6309-01Z-00-DX1.png belongs to the patient TCGA-G4-6309 (see the sketch after this list).

2. We have split the CRC_DX and STAD datasets into training, validation, and testing sets at the patient level. So after downloading them from the link, please split the dataset according to the patient names we list in the name_patients folder.

3. Of course, if you want to change the way the dataset is split, you can also split it yourself. For your reference, you can use the code at https://github.com/jnkather/MSIfromHE/blob/master/step_05_split_train_test.m to do this split.
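Since each patch's file name encodes its patient, here is a minimal sketch (our own illustration, not part of the MAg code) of recovering the patient ID, relying on the fact that the TCGA patient barcode is the three hyphen-separated fields starting at "TCGA":

# e.g. blk-AAAFIYHTSVIE-TCGA-G4-6309-01Z-00-DX1.png -> TCGA-G4-6309
def patient_id_from_patch(filename):
    parts = filename.split("-")
    i = parts.index("TCGA")              # locate the start of the barcode
    return "-".join(parts[i:i + 3])      # TCGA-<tissue source site>-<patient>

assert patient_id_from_patch("blk-AAAFIYHTSVIE-TCGA-G4-6309-01Z-00-DX1.png") == "TCGA-G4-6309"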

Data description

To help your experiment go smoothly, here is a description of some data you may use as input or output while reproducing MAg:

1. In the notebook 2.0.patch2image_counting.ipynb, you will use files that supply the names of patients; these files are placed in the folder /MAg/name_patients/, organized by dataset, set, and class.

NOTE: in the experiment, you will encounter some patient-level for loops in the code, so please modify the range parameters in those loops according to the number of patients in the different sets and classes.

2. In the notebooks 2.1.patient-level MAg-SVM_histogram.ipynb and 2.2.patient-level MAg-network.ipynb, you will use histogram-based features as the new training, testing, and validation sets, which are produced by 2.0 (a sketch of these features appears after this list). If you just want to test the performance of MAg rather than do a complete reproduction, we also provide the histogram-based features from our experiments in /MAg/datasets, organized by patch-level dataset, set, model, and class.

3. To compare the performance of MAg with other baselines, you may also use the results of the other baselines in the code. We provide the results of the counting baseline in the folder /MAg/results/counting_baselines_results, organized by patch-level classification model; these can also be obtained from 2.0.

4. Moreover, for your reference, we provide the result of each patch in the folder /MAg/results/patch_level_result/.

5. We also provide the names of the patches in the folder /MAg/name_patch/.
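To make the aggregation strategies concrete, here is a hedged sketch (our own illustration with made-up probabilities; the exact binning in 2.0 may differ) of the counting baseline, the averaging baseline, and the MAg histogram-based feature for a single patient:

import numpy as np

# patch-level MSI probabilities for one patient (illustrative values)
probs = np.array([0.12, 0.85, 0.67, 0.40, 0.91])

# counting baseline: fraction of patches predicted MSI
counting_score = np.mean(probs > 0.5)

# averaging baseline: mean patch-level probability
averaging_score = probs.mean()

# MAg-style feature: the whole distribution binned into hist_num bins
hist_feature, _ = np.histogram(probs, bins=10, range=(0.0, 1.0))
hist_feature = hist_feature / len(probs)   # normalized bin frequencies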

How to use MAg?

The code for our method is in the demo folder. By following the steps below, you can easily use MAg to complete training and prediction.

1. First, use 1.patch-level classification training.ipynb to perform patch-level training and obtain the classification models. The timm library makes this training process easy. For example, if you want to use ResNet18 at this stage, just run the code below after entering the working directory:

# train.py is timm's training script (see the links below)
import timm
!python train.py path_to_your_dataset -d ImageFolder --drop 0.25 --train-split train --val-split validation --pretrained --model resnet18 --num-classes 2 --opt adam --lr 1e-6 --hflip 0.5 --epochs 40 -b 32 --output path_to_your_model

The script train.py and other useful scripts in timm can be obtained from https://github.com/rwightman/pytorch-image-models. Also, here are some very helpful links that teach you how to use timm: https://fastai.github.io/timmdocs/ and https://rwightman.github.io/pytorch-image-models/

2. Second, after obtaining the patch-level classification model with the process above, use 2.0.patch2image_counting.ipynb to make the patch-level predictions. Just follow the code in the notebook and you will get the patch-level probabilities and the histogram-based features.
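As a sketch of what this patch-level prediction step looks like (our own illustration: path_to_your_model and path_to_patch are placeholders, the MSI class index is assumed, and a real pipeline would reuse the preprocessing from training):

import timm
import torch
from PIL import Image
from torchvision import transforms

# load the trained patch-level classifier from its checkpoint
model = timm.create_model('resnet18', num_classes=2, checkpoint_path=path_to_your_model)
model.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

# probability that one patch is MSI
with torch.no_grad():
    x = preprocess(Image.open(path_to_patch).convert('RGB')).unsqueeze(0)
    prob_msi = torch.softmax(model(x), dim=1)[0, 1].item()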

3. After obtaining the patch-level model and the patient-level histogram-based features from the processes above, you can now train the patient-level classification models. We provide two ways to do this: train an SVM with 2.1.patient-level MAg-SVM_histogram.ipynb, or a two-layer fully-connected neural network with 2.2.patient-level MAg-network.ipynb. If your dataset is not very large, we suggest using 2.1; if you have a huge dataset (e.g., one containing 100,000 patients), 2.2 is the better choice.
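For the SVM route in 2.1, a minimal sketch (assuming X_train / y_train hold the histogram-based features and patient-level labels; the kernel, C, and class_weight values are illustrative, not our tuned parameters):

from sklearn.svm import SVC

# patient-level classifier on the histogram-based features
svm = SVC(kernel='sigmoid', C=1.0, class_weight='balanced', probability=True)
svm.fit(X_train, y_train)
patient_probs = svm.predict_proba(X_val)[:, 1]   # patient-level MSI scores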

4. Both 2.1 and 2.2 contain code for a simple testing process. After obtaining the patient-level classification model, just continue following the code in these two notebooks and you will get the final results (e.g., F1 score, BACC, and AUC).
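For reference, these metrics can be computed with scikit-learn (a sketch assuming y_test and the patient-level scores from the previous step; the 0.5 threshold is illustrative):

from sklearn.metrics import f1_score, balanced_accuracy_score, roc_auc_score

y_pred = (patient_probs > 0.5).astype(int)
f1 = f1_score(y_test, y_pred)
bacc = balanced_accuracy_score(y_test, y_pred)
auc = roc_auc_score(y_test, patient_probs)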

NOTE: the patient-level training also requires you to follow the split you made before, so please remember to save the patient-level histogram-based features in xlsx files such as train.xlsx, validation.xlsx, and test.xlsx.

5. If you just want to reproduce our results using our SVM parameters, please use the demo reproduce_demo.ipynb.

6. The demo folder also contains some notebooks whose file names start with 0. We used these in our experiments. Although they are not directly part of the MAg pipeline, they may help you in your own experiments. Their roles differ; for example, 0.3.confusion_matrix.ipynb can help you compute a patient-level confusion matrix. The role of each demo is described at the beginning of its code.

Why not try the MAg_lib!

As you can see, these seemingly complex jupyter notebooks do not achieve the modularity and portability MAg deserves. So we provide a very early version of the MAg_lib library, which you can call directly (so far, we only provide the MAg method using an SVM; in the future, we may add other ML methods). Here are some instructions and tips that may help you when using MAg_lib.

  1. In MAg_lib, to keep the code concise, we no longer use xlsx files to store data. Instead, we use the dict (or json) format. In MAg_lib.MAg.convert_format, we provide the functions json_file_to_dict and dict_to_json_file for the conversion between dicts and json files.
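For example (a sketch; the file names are placeholders and the argument order of dict_to_json_file is our assumption, so please check the function comments):

from MAg_lib.MAg.convert_format import json_file_to_dict, dict_to_json_file

data = json_file_to_dict('train.json')        # json file -> python dict
dict_to_json_file(data, 'train_copy.json')    # python dict -> json file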

  2. Now let us see how to use the much more concise MAg_lib to achieve the MAg task! The functions in this library each have their own purpose, so we strongly recommend opening these files before using them and quickly scanning each function's comments to understand its role and its input and output formats, then combining them according to your needs. Here is an example:

First, you need three json files that associate the patient names with the paths of their patches, corresponding to the training set, validation set, and test set. The format of the files is like:

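(The original figure is not reproduced here; below is a hypothetical sketch of the structure, with illustrative patient names and paths:)

{
    "TCGA-G4-6309": [
        "CRC_DX/train/MSIMUT/blk-AAAFIYHTSVIE-TCGA-G4-6309-01Z-00-DX1.png",
        "CRC_DX/train/MSIMUT/blk-BBBGJZIUTWJF-TCGA-G4-6309-01Z-00-DX1.png"
    ]
}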

Second, do the patch-level prediction with the classification model, the same as step 1 in "How to use MAg?". With the function in MAg_lib, you can directly get a dict containing the patient-level features (please remember to do this step on all three sets so that you can do the next step!). Here is the example code:

import MAg_lib.modules
import timm

# load the trained patch-level classifier from its checkpoint
model = timm.create_model(model_name, num_classes=2, checkpoint_path=path_to_model)
# histogram-based patient-level features with 10 bins
save_features_dict = MAg_lib.modules.MAg.get_feature(model, path_to_step1_json, hist_num=10)

The save_features_dict is like:

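(The original figure is not reproduced here; with hist_num = 10, the structure is presumably one normalized 10-bin histogram per patient, with illustrative values:)

{"TCGA-G4-6309": [0.0, 0.1, 0.05, 0.0, 0.15, 0.2, 0.1, 0.1, 0.2, 0.1]}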

In fact, the function we provide can directly perform patient-level prediction on the json file containing the patch names; that is, if you are not interested in the intermediate features and want to skip that step, use this function directly and you will get the prediction results:

save_predict_dict = MAg_lib.modules.MAg.patient_predict(model, path_to_test_json, method, hist_num, svm)

NOTE: up to now we provide three choices for the parameter method: 'counting', 'averaging', and 'MAg', which represent the counting baseline, the averaging baseline, and our MAg method. The hist_num and svm arguments are required only when you choose 'MAg'.

You then get a dict containing the final patient-level prediction results. The save_predict_dict is like:

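(The original figure is not reproduced here; below is a hypothetical sketch, with the label encoding assumed:)

{"TCGA-G4-6309": 1}   # e.g. 1 = MSIMUT, 0 = MSS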

  3. You may then ask: how can I get the SVM I need in MAg? sklearn.svm solves this smoothly. In our initial experiments, we manually adjusted the parameters of the SVM to obtain the model that performed best on the validation set for the prediction task. (Please remember to use MAg_lib.modules.convert_format.convert_feature to convert the json file containing the features into the feature lists for training and validation.)

Here we also provide a naive function, similar to grid search, for parameter optimization, for your reference. If you have better optimization methods, please contact us; we are happy to discuss this topic:

from MAg_lib.modules.MAg import find_best_svm

# keep the SVM that performs best on the validation set
best_parameters = find_best_svm(X, y, X_val, y_val, ['sigmoid'], C, class_weight)

X, y represent the training set and X_val, y_val represent the validation set. The last three parameters are lists of the kernels, penalty coefficients, and class weights you want this function to try.
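For instance (the candidate values below are illustrative, not our tuned parameters):

C = [0.1, 1, 10, 100]                        # candidate penalty coefficients
class_weight = ['balanced', {0: 1, 1: 2}]    # candidate class weights
best_parameters = find_best_svm(X, y, X_val, y_val, ['sigmoid', 'rbf'], C, class_weight)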

And by the way, here is another function which can evaluate the performance of an SVM:

eval_dict = MAg_lib.modules.MAg.evaluate(X_val,y_val,svm)

Trained models

In the folder trained models, we provide the parameters of the SVMs from our past experiments that achieved the best performance on the validation set.

NOTE: if you want to use our data for stage-2 training, these parameters are only applicable to experiments that do not oversample the MSIMUT class; that is, please set the number of training samples to 188 in CRC_DX and 124 in STAD and do not use the remaining duplicated samples. If you want to try oversampling the MSIMUT class, you are more than welcome to tell us your results.

Experiment and results

The experiments were performed on a Google Colab workstation with an NVIDIA Tesla P100 GPU. In stage I, five prevalent approaches were used as the baseline feature extractors: ResNet, MobileNetV2, EfficientNet, Dpn, and ResNext. In stage II, we mainly used an SVM. Moreover, to assess generalizability, the experiments above were run on both the CRC dataset and the STAD dataset. Below are the results of our experiments and the comparison between MAg and the two commonly used methods (counting and averaging):


Some supplements

Because our research is still at a very early stage of exploration, our code may have some defects. In the future, we may continue to improve it, hoping to achieve higher portability and modularity. If you encounter any problems when using MAg or have any suggestions for this research, please let us know on GitHub or contact us directly 😊
