A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising (CVPR 2020 Oral & TPAMI 2021)


ELD

The implementation of the CVPR 2020 (Oral) paper "A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising" and its journal (TPAMI) extension "Physics-based Noise Modeling for Extreme Low-light Photography". Interested readers are also referred to an insightful note about this work on Zhihu (in Chinese).

News

  • 2022/01/08: Major update: released the training code and other related items (including synthetic datasets, the customized rawpy, calibrated camera noise parameters, baseline noise models, the calibrated SonyA7S2 camera response function (CRF), and a modern implementation of the EMoR radiometric calibration method) to accelerate further research!
  • 2022/01/05: Replaced the released ELD dataset with my local version of the dataset. We thank @fenghansen for pointing this out; please refer to this issue for more details.
  • 2021/08/05: The comprehensive version of this work was accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
  • 2020/07/16: Released the ELD dataset and our pretrained models on Google Drive and Baidu Disk (code: 0lby).

Highlights

  • We present a highly accurate noise formation model based on the characteristics of CMOS photosensors, which enables us to synthesize realistic samples that better match the physics of the image formation process.

  • To study the generalizability of a neural network trained with existing schemes, we introduce a new Extreme Low-light Denoising (ELD) dataset that covers four representative modern camera devices, for evaluation purposes only. The image capture setup and example images are shown below:

  • By training only with our synthetic data, we demonstrate that a convolutional neural network can compete with, or sometimes even outperform, a network trained with paired real data under extreme low-light settings. The denoising results of networks trained with different schemes, i.e. 1) synthetic data generated by the Poissonian-Gaussian noise model, 2) paired real data from the SID dataset, and 3) synthetic data generated by our proposed noise model, are displayed as follows:

Prerequisites

  • Python >= 3.6, PyTorch >= 1.6
  • Requirements: opencv-python, tensorboardX, lmdb, rawpy, torchinterp1d
  • Platforms: Ubuntu 16.04, CUDA 10.1

Note that this codebase relies on my own customized rawpy, which provides more functionality than the official package; it is released together with our datasets and the pretrained models. To build rawpy from source, first compile and install the LibRaw library following its official instructions, then run pip install -e . in the rawpy directory.
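
To verify that the rawpy installation works, the snippet below is a minimal sketch using only the standard rawpy API; the file path is a placeholder, and the additional functionality provided by the customized build is not shown here.

```python
# A minimal sketch using the standard rawpy API only; "example.ARW" is a
# placeholder path, and the extra features of the customized rawpy build
# are not covered by this example.
import numpy as np
import rawpy

path = "example.ARW"  # placeholder: any Sony raw file
with rawpy.imread(path) as raw:
    bayer = raw.raw_image_visible.astype(np.float32)   # raw Bayer data
    print("Bayer shape:", bayer.shape)
    print("Black level per channel:", raw.black_level_per_channel)
    # Simple camera-style rendering using libraw's default pipeline
    rgb = raw.postprocess(use_camera_wb=True, no_auto_bright=True, output_bps=16)
    print("Rendered RGB shape:", rgb.shape)
```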

Quick Start

Due to the business license, we are unable to provide the noise model or the calibration method. Instead, we release our collected ELD dataset and our pretrained models to facilitate future research.

To reproduce the results presented in the paper (Tables 1 and 2), please take a look at scripts/test_SID.sh and scripts/test_ELD.sh.

Update (2022-01-08): We release the training code and the synthetic datasets per users' requests. The training scripts and user instructions can be found in scripts/train.sh. Additionally, we provide the baseline noise models (G/G+P/G+P*) and the calibrated noise parameters for all ELD cameras for training (see noise.py and train_syn.py), which could serve as a starting point to develop your own noise model.
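
For orientation, here is a minimal, illustrative NumPy sketch of the G and G+P baseline noise models; the function names and parameter values (K, sigma_read) are placeholders and do not correspond to the calibrated parameters or the implementation shipped in noise.py.

```python
# Illustrative sketch of the G and G+P baselines; parameter names and values
# (K, sigma_read) are placeholders, not the calibrated parameters in noise.py.
import numpy as np

def add_gaussian_noise(clean, sigma_read=0.01):
    """'G' baseline: signal-independent Gaussian read noise."""
    return clean + np.random.normal(0.0, sigma_read, clean.shape)

def add_poisson_gaussian_noise(clean, K=0.01, sigma_read=0.01):
    """'G+P' baseline: Poisson shot noise (system gain K) plus Gaussian read noise."""
    shot = np.random.poisson(np.clip(clean, 0, None) / K) * K  # photon shot noise
    read = np.random.normal(0.0, sigma_read, clean.shape)      # sensor read noise
    return shot + read

# Usage on a normalized raw image in [0, 1]:
clean = np.random.rand(64, 64).astype(np.float32)
noisy = add_poisson_gaussian_noise(clean, K=0.005, sigma_read=0.002)
```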

We use lmdb to prepare datasets; please refer to util/lmdb_data.py to see how we generate the datasets from SID. We also provide a new implementation of the classic EMoR radiometric calibration method and use it to calibrate the CRF of the SonyA7S2, which can further be used to simulate a realistic on-board ISP, as in the commercial SonyA7S2 camera.
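
As a rough illustration of the lmdb packing step, the sketch below writes NumPy arrays into an lmdb database; the key scheme and the pickled record layout are assumptions for illustration only and do not match the exact format produced by util/lmdb_data.py.

```python
# A minimal sketch of packing numpy arrays into an lmdb database; the key
# scheme and record layout are illustrative only — see util/lmdb_data.py
# for the actual format expected by the training code.
import lmdb
import numpy as np
import pickle

def write_lmdb(db_path, arrays):
    env = lmdb.open(db_path, map_size=1 << 40)  # up to ~1 TB of mapped space
    with env.begin(write=True) as txn:
        for i, arr in enumerate(arrays):
            key = f"{i:08d}".encode("ascii")
            # store raw bytes plus shape/dtype so the array can be reconstructed
            txn.put(key, pickle.dumps({"data": arr.tobytes(),
                                       "shape": arr.shape,
                                       "dtype": str(arr.dtype)}))
    env.close()

write_lmdb("SID_train.lmdb", [np.zeros((4, 512, 512), dtype=np.uint16)])
```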

ELD Dataset

The dataset capture protocol is shown as follows:

We choose three ISO settings (800, 1600, 3200) and four low-light factors (x1, x10, x100, x200) to capture the dataset (x1 and x10 are not used in our paper). Image ids 1, 6, 11, and 16 correspond to the long-exposure reference images. Please refer to the ELDEvalDataset class in data/sid_dataset.py for more details.
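
Based on the description above, the sketch below shows one plausible way to group capture ids and pair each short-exposure id with a reference image; the nearest-reference rule here is an assumption, and the actual pairing used for evaluation is defined in the ELDEvalDataset class.

```python
# Illustrative grouping of capture ids per scene (ids 1, 6, 11, 16 are the
# long-exposure references); the nearest-reference pairing is an assumption —
# the authoritative logic lives in ELDEvalDataset in data/sid_dataset.py.
REF_IDS = [1, 6, 11, 16]                                   # long-exposure references
NOISY_IDS = [i for i in range(1, 17) if i not in REF_IDS]  # short-exposure captures

def nearest_reference(img_id):
    """Pair a short-exposure id with the closest reference id (assumption)."""
    return min(REF_IDS, key=lambda r: abs(r - img_id))

print({i: nearest_reference(i) for i in NOISY_IDS})
```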

Citation

If you find our code helpful in your research or work, please cite our paper.

@inproceedings{wei2020physics,
  title={A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising},
  author={Wei, Kaixuan and Fu, Ying and Yang, Jiaolong and Huang, Hua},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020},
}

@article{wei2021physics,
  title={Physics-based Noise Modeling for Extreme Low-light Photography},
  author={Wei, Kaixuan and Fu, Ying and Zheng, Yinqiang and Yang, Jiaolong},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  publisher={IEEE}
}

Contact

If you find any problem, please feel free to contact me (kxwei at princeton.edu, kaixuan_wei at bit.edu.cn). A brief self-introduction (including your name, affiliation, and position) is required if you would like in-depth help from me. I'd be glad to talk with you if more information (e.g. a link to your personal website) is attached. Note that I will not reply to impolite or aggressive emails that violate the above criteria.
