This is an official implementation of the CVPR 2022 paper "Blind2Unblind: Self-Supervised Image Denoising with Visible Blind Spots".

Overview

Blind2Unblind: Self-Supervised Image Denoising with Visible Blind Spots

[Figure: Blind2Unblind framework overview]

Citing Blind2Unblind

@inproceedings{wang2022blind2unblind,
  title={Blind2Unblind: Self-Supervised Image Denoising with Visible Blind Spots}, 
  author={Zejin Wang and Jiazheng Liu and Guoqing Li and Hua Han},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Installation

The model is built with Python 3.8.5 and PyTorch 1.7.1 on Ubuntu 18.04.

Data Preparation

1. Prepare Training Dataset

  • To process the ImageNet validation set, run

    python ./dataset_tool.py
  • To process the SIDD Medium dataset in raw-RGB, run

    python ./dataset_tool_raw.py
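
Both tools write fixed-size training crops under ./data/train. For a rough idea of what such a preprocessing step looks like, the sketch below slices large source images into square patches; the source folder, patch size, and output format are illustrative assumptions and may differ from what dataset_tool.py actually does.

import os
from glob import glob
from PIL import Image

SRC = "./ILSVRC2012_img_val"        # hypothetical location of the raw source images
DST = "./data/train/Imagenet_val"   # matches the --data_dir used by train_b2u.py
PATCH = 256                         # assumed crop size

os.makedirs(DST, exist_ok=True)
for idx, path in enumerate(sorted(glob(os.path.join(SRC, "*.JPEG")))):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    if w < PATCH or h < PATCH:
        continue                    # skip images smaller than one patch
    left, top = (w - PATCH) // 2, (h - PATCH) // 2
    img.crop((left, top, left + PATCH, top + PATCH)).save(
        os.path.join(DST, f"{idx:06d}.png"))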

2. Prepare Validation Dataset

Please put your validation datasets under ./Blind2Unblind/data/validation.

Pretrained Models

The pre-trained models are placed in the folder: ./Blind2Unblind/pretrained_models

# For synthetic denoising
# gauss25
./pretrained_models/g25_112f20_beta19.7.pth
# gauss5_50
./pretrained_models/g5-50_112rf20_beta19.4.pth
# poisson30
./pretrained_models/p30_112f20_beta19.1.pth
# poisson5_50
./pretrained_models/p5-50_112rf20_beta20.pth

# For raw-RGB denoising
./pretrained_models/rawRGB_112rf20_beta19.4.pth

# For fluorescence microscopy denoising
# Confocal_FISH
./pretrained_models/Confocal_FISH_112rf20_beta20.pth
# Confocal_MICE
./pretrained_models/Confocal_MICE_112rf20_beta19.7.pth
# TwoPhoton_MICE
./pretrained_models/TwoPhoton_MICE_112rf20_beta20.pth
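
To use one of these checkpoints outside the provided test scripts, loading follows the usual PyTorch pattern. A minimal sketch, assuming the repository's U-Net model class; the import path and constructor arguments are assumptions, not the repo's exact API:

import torch
from models import UNet  # hypothetical import; point this at the repo's model definition

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = UNet(in_channels=3, out_channels=3)  # adjust to the actual constructor signature
state = torch.load("./pretrained_models/g25_112f20_beta19.7.pth", map_location=device)
if isinstance(state, dict) and "state_dict" in state:
    state = state["state_dict"]  # some checkpoints wrap the weights in a dict
model.load_state_dict(state)
model.to(device).eval()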

Train

  • Train on synthetic dataset (the --noisetype options are sketched after this list)
python train_b2u.py --noisetype gauss25 --data_dir ./data/train/Imagenet_val --val_dirs ./data/validation --save_model_path ../experiments/results --log_name b2u_unet_gauss25_112rf20 --Lambda1 1.0 --Lambda2 2.0 --increase_ratio 20.0
  • Train on SIDD raw-RGB Medium dataset
python train_sidd_b2u.py --data_dir ./data/train/SIDD_Medium_Raw_noisy_sub512 --val_dirs ./data/validation --save_model_path ../experiments/results --log_name b2u_unet_raw_112rf20 --Lambda1 1.0 --Lambda2 2.0 --increase_ratio 20.0
  • Train on FMDD dataset
python train_fmdd_b2u.py --data_dir ./dataset/fmdd_sub/train --val_dirs ./dataset/fmdd_sub/validation --subfold Confocal_FISH --save_model_path ../experiments/fmdd --log_name Confocal_FISH_b2u_unet_fmdd_112rf20 --Lambda1 1.0 --Lambda2 2.0 --increase_ratio 20.0
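
The --noisetype values above follow the convention used by most self-supervised denoising papers: a fixed or range-sampled Gaussian sigma on the 0-255 scale, or a fixed or range-sampled Poisson rate. The snippet below is a hedged sketch of that convention; check train_b2u.py for the exact parameterisation.

import numpy as np

def add_noise(clean, noisetype, rng=None):
    """clean: float32 image in [0, 1]; returns a noisy copy (illustrative only)."""
    rng = rng or np.random.default_rng()
    if noisetype == "gauss25":
        return clean + rng.normal(0.0, 25.0 / 255.0, clean.shape)
    if noisetype == "gauss5_50":
        sigma = rng.uniform(5.0, 50.0) / 255.0
        return clean + rng.normal(0.0, sigma, clean.shape)
    if noisetype == "poisson30":
        return rng.poisson(clean * 30.0) / 30.0
    if noisetype == "poisson5_50":
        lam = rng.uniform(5.0, 50.0)
        return rng.poisson(clean * lam) / lam
    raise ValueError(f"unknown noisetype: {noisetype}")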

Test

  • Test on Kodak, BSD300 and Set14

    • For noisetype: gauss25

      python test_b2u.py --noisetype gauss25 --checkpoint ./pretrained_models/g25_112f20_beta19.7.pth --test_dirs ./data/validation --save_test_path ./test --log_name b2u_unet_g25_112rf20 --beta 19.7
    • For noisetype: gauss5_50

      python test_b2u.py --noisetype gauss5_50 --checkpoint ./pretrained_models/g5-50_112rf20_beta19.4.pth --test_dirs ./data/validation --save_test_path ./test --log_name b2u_unet_g5_50_112rf20 --beta 19.4
    • For noisetype: poisson30

      python test_b2u.py --noisetype poisson30 --checkpoint ./pretrained_models/p30_112f20_beta19.1.pth --test_dirs ./data/validation --save_test_path ./test --log_name b2u_unet_p30_112rf20 --beta 19.1
    • For noisetype: poisson5_50

      python test_b2u.py --noisetype poisson5_50 --checkpoint ./pretrained_models/p5-50_112rf20_beta20.pth --test_dirs ./data/validation --save_test_path ./test --log_name b2u_unet_p5_50_112rf20 --beta 20.0
  • Test on SIDD Validation in raw-RGB space

python test_sidd_b2u.py --checkpoint ./pretrained_models/rawRGB_112rf20_beta19.4.pth --test_dirs ./data/validation --save_test_path ./test --log_name validation_b2u_unet_raw_112rf20 --beta 19.4
  • Test on SIDD Benchmark in raw-RGB space
python benchmark_sidd_b2u.py --checkpoint ./pretrained_models/rawRGB_112rf20_beta19.4.pth --test_dirs ./data/validation --save_test_path ./test --log_name benchmark_b2u_unet_raw_112rf20 --beta 19.4
  • Test on FMDD Validation

    • For Confocal_FISH
    python test_fmdd_b2u.py --checkpoint ./pretrained_models/Confocal_FISH_112rf20_beta20.pth --test_dirs ./dataset/fmdd_sub/validation --subfold Confocal_FISH --save_test_path ./test --log_name Confocal_FISH_b2u_unet_fmdd_112rf20 --beta 20.0
    • For Confocal_MICE
    python test_fmdd_b2u.py --checkpoint ./pretrained_models/Confocal_MICE_112rf20_beta19.7.pth --test_dirs ./dataset/fmdd_sub/validation --subfold Confocal_MICE --save_test_path ./test --log_name Confocal_MICE_b2u_unet_fmdd_112rf20 --beta 19.7
    • For TwoPhoton_MICE
    python test_fmdd_b2u.py --checkpoint ./pretrained_models/TwoPhoton_MICE_112rf20_beta20.pth --test_dirs ./dataset/fmdd_sub/validation --subfold TwoPhoton_MICE --save_test_path ./test --log_name TwoPhoton_MICE_b2u_unet_fmdd_112rf20 --beta 20.0
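
The test scripts above report PSNR (and typically SSIM) against the clean references. If you want to verify a saved result independently, a standalone PSNR helper looks like this; it is not taken from the repository:

import numpy as np

def psnr(clean, denoised, data_range=255.0):
    clean = np.asarray(clean, dtype=np.float64)
    denoised = np.asarray(denoised, dtype=np.float64)
    mse = np.mean((clean - denoised) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)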