Flexible-CLmser: Regularized Feedback Connections for Biomedical Image Segmentation

Overview

The skip connections in U-Net pass features from the encoder levels to the corresponding decoder levels in a symmetric way, which has made U-Net and its variants state-of-the-art approaches for biomedical image segmentation. However, these skip connections are unidirectional and ignore feedback from the decoder, which could be used to further improve segmentation performance. In this paper, we exploit this feedback information to recurrently refine the segmentation. We develop a deep bidirectional network based on the least mean square error reconstruction (Lmser) self-organizing network, an early network obtained by folding an autoencoder along its central hidden layer. Such folding merges the neurons of paired encoder and decoder layers into one, equivalently forming bidirectional skip connections between the encoder and decoder. We find that although the feedback links increase segmentation accuracy, they may introduce noise into the segmentation as the network proceeds recurrently. To tackle this issue, we present a gating and masking mechanism on the feedback connections to filter out irrelevant information. Experimental results on the MoNuSeg, TNBC, and EM membrane datasets demonstrate that our method is robust and outperforms state-of-the-art methods.
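
The idea of the regularized feedback connection can be illustrated with a minimal, hypothetical PyTorch sketch (not the code in this repository): a decoder feature map is gated and masked before being blended back into the corresponding encoder level on the next recurrent iteration. The module name, the 1x1-convolution gate, and the blending formula below are illustrative assumptions.

import torch
import torch.nn as nn

class GatedFeedback(nn.Module):
    """Illustrative gate/mask on a backward (decoder-to-encoder) connection."""
    def __init__(self, channels, alpha=0.4):
        super().__init__()
        self.alpha = alpha  # weight of the backward connection, cf. the --alpha argument
        # A 1x1 convolution followed by a sigmoid produces a soft mask in [0, 1].
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, encoder_feat, decoder_feat):
        # Suppress irrelevant feedback, then blend it into the encoder feature.
        mask = self.gate(decoder_feat)
        return encoder_feat + self.alpha * mask * decoder_feat

# Example: refine a 64-channel encoder feature with feedback from the decoder.
enc = torch.randn(1, 64, 128, 128)
dec = torch.randn(1, 64, 128, 128)
refined = GatedFeedback(64)(enc, dec)  # shape: (1, 64, 128, 128)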

This repository holds the Python implementation of the method described in the paper published in BIBM 2021.

Boheng Cao, Shikui Tu*, Lei Xu, "Flexible-CLmser: Regularized Feedback Connections for Biomedical Image Segmentation", BIBM2021

Content

  1. Structure
  2. Requirements
  3. Data
  4. Training
  5. Testing
  6. Acknowledgement

Structure

--checkpoints        # pretrained models

--data               # data for MoNuSeg, TNBC, and EM

--pytorch_version    # code

Requirements

  • Python 3.6 or higher.
  • PIL >= 7.0.0
  • matplotlib >= 3.3.1
  • tqdm >= 4.54.1
  • imgaug >= 0.4.0
  • torch >= 1.5.0
  • torchvision >= 0.6.0
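
If needed, the dependencies can typically be installed with pip (PIL is provided by the pillow package); exact versions may vary:

pip install "pillow>=7.0.0" "matplotlib>=3.3.1" "tqdm>=4.54.1" "imgaug>=0.4.0" "torch>=1.5.0" "torchvision>=0.6.0"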

...

Data

The authors of BiONet have already gathered the data for all three datasets (including EM: https://bionets.github.io/Piriform_data.zip).

Please refer to the official website (or project repo) for license and terms of usage.

MoNuSeg: https://monuseg.grand-challenge.org/Data/

TNBC: https://github.com/PeterJackNaylor/DRFNS

We also provide our data (for EM, only stacks 1 and 4 are included) and pretrained models here: https://pan.baidu.com/s/1pHTexUIS8ganD_BwbWoAXA (password: sjtu)

or

https://drive.google.com/drive/folders/1GJq-AV1L1UNhI2WNMDuynYyGtOYpjQEi?usp=sharing
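
Based on the default paths used in the commands below, the downloaded data is assumed to be unpacked under ./data, for example:

./data/EM/train and ./data/EM/test
./data/monuseg/train and ./data/monuseg/test
./data/tnbc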

Training

As an example, for EM segmentation, you can simply run:

python main.py --train_data ./data/EM/train --valid_data ./data/EM/test --exp EM_1 --alpha=0.4

Some of the available arguments are:

Argument          Description                                        Default                Type
--epochs          Training epochs                                    300                    int
--batch_size      Batch size                                         2                      int
--steps           Steps per epoch                                    250                    int
--lr              Learning rate                                      0.01                   float
--lr_decay        Learning rate decay                                3e-5                   float
--iter            Number of recurrent iterations                     3                      int
--train_data      Training data path                                 ./data/monuseg/train   str
--valid_data      Validation data path                               ./data/monuseg/test    str
--valid_dataset   Validation dataset type                            monuseg                str
--exp             Experiment name (use the same name when testing)   1                      str
--evaluate_only   Only evaluate using an existing model              store_true             action
--alpha           Weight of the skip/backward connections            0.4                    float
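
For example, a training run on MoNuSeg that spells out the defaults above might look like this (the experiment name is chosen arbitrarily):

python main.py --train_data ./data/monuseg/train --valid_data ./data/monuseg/test --valid_dataset monuseg --exp monuseg_1 --epochs 300 --batch_size 2 --lr 0.01 --iter 3 --alpha=0.4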

Testing

For MoNuSeg and TNBC, you can test a trained model directly with our code, for example:

python main.py --valid_data ./data/tnbc --valid_dataset tnbc --exp your_experiment_id --alpha=0.4 --evaluate_only

For EM, our code cannot compute the Rand F-score directly, but it saves the ground truth and predictions in /checkpoints/your_experiment_id/outputs; you can then use ImageJ together with the evaluation code from http://brainiac2.mit.edu/isbi_challenge/evaluation to obtain the Rand F-score.

Acknowledgement

This project would not have been possible without code or files from the following open-source projects:

BiONet

Reference

Please cite our work if you find our code or paper useful.

tbd