Flexible-CLmser: Regularized Feedback Connections for Biomedical Image Segmentation

Overview


The skip connections in U-Net pass features from the encoder levels to the corresponding decoder levels in a symmetric way, which has made U-Net and its variants state-of-the-art approaches for biomedical image segmentation. However, the U-Net skip connections are unidirectional and do not exploit feedback from the decoder, which could further improve segmentation performance. In this paper, we exploit this feedback information to recurrently refine the segmentation. We develop a deep bidirectional network based on the least mean square error reconstruction (Lmser) self-organizing network, an early network obtained by folding an autoencoder along its central hidden layer. Such folding merges the neurons on paired encoder and decoder layers into one, effectively forming bidirectional skip connections between encoder and decoder. We find that although the feedback links increase segmentation accuracy, they may introduce noise into the segmentation as the network proceeds recurrently. To tackle this issue, we present a gating and masking mechanism on the feedback connections to filter out irrelevant information. Experimental results on the MoNuSeg, TNBC, and EM membrane datasets demonstrate that our method is robust and outperforms state-of-the-art methods.
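
To make the idea concrete, below is a minimal PyTorch sketch of a gated feedback connection in this spirit: the decoder feature from the previous recurrent iteration is masked by a learned gate and mixed into the encoder feature with weight alpha (cf. the --alpha option under Training). This is an illustrative assumption, not the authors' exact implementation; names such as GatedFeedback are hypothetical.

```python
# Minimal sketch (assumption, not the paper's exact module) of a gated
# feedback connection: decoder feedback is filtered by a learned mask
# before being mixed into the forward (encoder) feature.
import torch
import torch.nn as nn


class GatedFeedback(nn.Module):
    def __init__(self, channels: int, alpha: float = 0.4):
        super().__init__()
        self.alpha = alpha  # weight of the skip/backward connection (cf. --alpha)
        # 1x1 convolution producing a per-pixel, per-channel gate in [0, 1]
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, enc_feat: torch.Tensor, dec_feedback: torch.Tensor) -> torch.Tensor:
        # Compute the mask from both features, suppress irrelevant feedback,
        # then add the weighted, masked feedback to the encoder feature.
        mask = self.gate(torch.cat([enc_feat, dec_feedback], dim=1))
        return enc_feat + self.alpha * mask * dec_feedback


# Example: both features have shape (N, 64, H, W).
merge = GatedFeedback(channels=64, alpha=0.4)
fused = merge(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```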

This repository holds the Python implementation of the method described in the paper published in BIBM 2021.

Boheng Cao, Shikui Tu*, Lei Xu, "Flexible-CLmser: Regularized Feedback Connections for Biomedical Image Segmentation", BIBM2021

Content

  1. Structure
  2. Requirements
  3. Data
  4. Training
  5. Testing
  6. Acknowledgement
  7. Reference

Structure

--checkpoints        # pretrained models
--data               # data for MoNuSeg, TNBC, and EM
--pytorch_version    # code

Requirements

  • Python 3.6 or higher.
  • Pillow (PIL) >= 7.0.0
  • matplotlib >= 3.3.1
  • tqdm >= 4.54.1
  • imgaug >= 0.4.0
  • torch >= 1.5.0
  • torchvision >= 0.6.0

...

Data

The authors of BiONet have already gathered the data for the three datasets (including EM: https://bionets.github.io/Piriform_data.zip).

Please refer to the official website (or project repo) for license and terms of usage.

MoNuSeg: https://monuseg.grand-challenge.org/Data/

TNBC: https://github.com/PeterJackNaylor/DRFNS

We also provide our data (for EM, only stacks 1 and 4 are included) and pretrained models here: https://pan.baidu.com/s/1pHTexUIS8ganD_BwbWoAXA (password: sjtu)

or

https://drive.google.com/drive/folders/1GJq-AV1L1UNhI2WNMDuynYyGtOYpjQEi?usp=sharing

Training

As an example, for EM segmentation, you can simply run:

python main.py --train_data ./data/EM/train --valid_data ./data/EM/test --exp EM_1 --alpha=0.4

Some of the available arguments are:

| Argument | Description | Default | Type |
| --- | --- | --- | --- |
| --epochs | Training epochs | 300 | int |
| --batch_size | Batch size | 2 | int |
| --steps | Steps per epoch | 250 | int |
| --lr | Learning rate | 0.01 | float |
| --lr_decay | Learning rate decay | 3e-5 | float |
| --iter | Recurrent iterations | 3 | int |
| --train_data | Training data path | ./data/monuseg/train | str |
| --valid_data | Validation data path | ./data/monuseg/test | str |
| --valid_dataset | Validation dataset type | monuseg | str |
| --exp | Experiment name (use the same name when testing) | 1 | str |
| --evaluate_only | Only evaluate using an existing model | store_true | action |
| --alpha | Weight of the skip/backward connections | 0.4 | float |
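
For reference, here is a minimal argparse sketch of how these options might be declared, assuming standard argparse usage; the actual definitions live in main.py and may differ.

```python
# Sketch of the CLI options listed above (assumption; see main.py for the real ones).
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Flexible-CLmser training/testing")
    parser.add_argument("--epochs", type=int, default=300, help="training epochs")
    parser.add_argument("--batch_size", type=int, default=2, help="batch size")
    parser.add_argument("--steps", type=int, default=250, help="steps per epoch")
    parser.add_argument("--lr", type=float, default=0.01, help="learning rate")
    parser.add_argument("--lr_decay", type=float, default=3e-5, help="learning rate decay")
    parser.add_argument("--iter", type=int, default=3, help="recurrent iterations")
    parser.add_argument("--train_data", type=str, default="./data/monuseg/train")
    parser.add_argument("--valid_data", type=str, default="./data/monuseg/test")
    parser.add_argument("--valid_dataset", type=str, default="monuseg")
    parser.add_argument("--exp", type=str, default="1", help="experiment name")
    parser.add_argument("--evaluate_only", action="store_true")
    parser.add_argument("--alpha", type=float, default=0.4,
                        help="weight of the skip/backward connections")
    return parser


args = build_parser().parse_args()
```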

Testing

For MoNuSeg and TNBC, you can use our code directly to test the model, for example:

python main.py --valid_data ./data/tnbc --valid_dataset tnbc --exp your_experiment_id --alpha=0.4 --evaluate_only

For EM, our code cannot compute the Rand F-score directly, but it saves the ground truth and the predictions in /checkpoints/your_experiment_id/outputs. You can use ImageJ together with the evaluation code from http://brainiac2.mit.edu/isbi_challenge/evaluation to obtain the Rand F-score.

Acknowledgement

This project could not have been completed without code and files from the following open-source projects:

BiONet

Reference

Please cite our work if you find our code/paper useful for your work.

tbd