Official PyTorch Implementation of Unsupervised Image Denoising with Frequency Domain Knowledge

Overview

Unsupervised Image Denoising with Frequency Domain Knowledge (BMVC 2021 Oral): Official Project Page

This repository provides the official PyTorch implementation of the following paper:

Unsupervised Image Denoising with Frequency Domain Knowledge

Nahyun Kim* (KAIST), Donggon Jang* (KAIST), Sunhyeok Lee (KAIST), Bomi Kim (KAIST), and Dae-Shik Kim (KAIST) (*These authors contributed equally.)

BMVC 2021, accepted as an oral paper.

Abstract: Supervised learning-based methods yield robust denoising results, yet they are inherently limited by the need for large-scale clean/noisy paired datasets. The use of unsupervised denoisers, on the other hand, necessitates a more detailed understanding of the underlying image statistics. In particular, it is well known that apparent differences between clean and noisy images are most prominent on high-frequency bands, justifying the use of low-pass filters as part of conventional image preprocessing steps. However, most learning-based denoising methods utilize only one-sided information from the spatial domain without considering frequency domain information. To address this limitation, in this study we propose a frequency-sensitive unsupervised denoising method. To this end, a generative adversarial network (GAN) is used as a base structure. Subsequently, we include spectral discriminator and frequency reconstruction loss to transfer frequency knowledge into the generator. Results using natural and synthetic datasets indicate that our unsupervised learning method augmented with frequency information achieves state-of-the-art denoising performance, suggesting that frequency domain information could be a viable factor in improving the overall performance of unsupervised learning-based methods.
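
In code, the frequency reconstruction loss idea reads roughly as follows. This is a minimal, illustrative sketch, not the paper's exact formulation: it compares the magnitude spectra of the generator output and a reference image in the Fourier domain with an L1 penalty, so spectral mismatches that a purely spatial loss overlooks are penalized. The function name freq_recon_loss is ours.

import torch

def freq_recon_loss(output, reference):
    """L1 distance between 2-D FFT magnitude spectra of (N, C, H, W) batches."""
    output_mag = torch.fft.fft2(output, norm="ortho").abs()
    reference_mag = torch.fft.fft2(reference, norm="ortho").abs()
    return torch.nn.functional.l1_loss(output_mag, reference_mag)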

Requirements

To install requirements:

conda env create -n [your env name] -f environment.yaml
conda activate [your env name]

To train the model

Synthetic Noise (AWGN)

  1. Download the DIV2K dataset for training here.
  2. Randomly split the DIV2K dataset into clean and noisy sets. Please refer to the .txt files in split_data.
  3. Place the split datasets (DIV2K_C and DIV2K_N) in the ./dataset directory as follows:
dataset
├─── DIV2K_C
├─── DIV2K_N
└─── test
  4. Use gen_dataset_synthetic.py to package the dataset in HDF5 (h5py) format; a rough sketch of this step follows this list.
  5. After that, run one of these commands:
sh ./scripts/train_awgn_sigma15.sh # AWGN with a noise level = 15
sh ./scripts/train_awgn_sigma25.sh # AWGN with a noise level = 25
sh ./scripts/train_awgn_sigma50.sh # AWGN with a noise level = 50
  6. After training finishes, the trained .pth file is stored in the ./exp/[exp_name]/[seed_number]/saved_models/ directory.
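
For intuition, here is a minimal sketch of what the packaging step might look like. This is our illustration, not the contents of gen_dataset_synthetic.py: it packs a folder of PNG images into one HDF5 file and, for the noisy half of an AWGN experiment, adds zero-mean Gaussian noise with standard deviation sigma/255. All file names and dataset keys below are illustrative.

import glob

import h5py
import numpy as np
from PIL import Image

def package_h5(image_dir, out_path, sigma=None):
    """Pack a folder of PNG images into one HDF5 file.

    If sigma is given, add AWGN (zero-mean Gaussian noise with standard
    deviation sigma/255) before writing, matching the "noise level =
    15/25/50" settings of the training scripts.
    """
    paths = sorted(glob.glob(image_dir + "/*.png"))
    with h5py.File(out_path, "w") as f:
        for i, path in enumerate(paths):
            img = np.asarray(Image.open(path), dtype=np.float32) / 255.0
            if sigma is not None:
                noise = np.random.normal(0.0, sigma / 255.0, img.shape)
                img = np.clip(img + noise, 0.0, 1.0)
            f.create_dataset(str(i), data=img.astype(np.float32))

# The clean half stays clean; the noisy half receives AWGN at the chosen sigma.
package_h5("./dataset/DIV2K_C", "./dataset/div2k_clean.h5")
package_h5("./dataset/DIV2K_N", "./dataset/div2k_noisy_sigma15.h5", sigma=15)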

Real-World Noise

  1. Download the SIDD-Medium dataset for training here.
  2. Randomly split the SIDD-Medium dataset into clean and noisy sets. Please refer to the .txt files in split_data; a splitting sketch appears after this section.
  3. Place the split datasets (SIDD_C and SIDD_N) in the ./dataset directory as follows:
dataset
├─── SIDD_C
├─── SIDD_N
└─── test
  4. Use gen_dataset_real.py to package the dataset in HDF5 (h5py) format.
  5. After that, run this command:
sh ./scripts/train_real.sh
  6. After training finishes, the trained .pth file is stored in the ./exp/[exp_name]/[seed_number]/saved_models/ directory.
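
Assuming the .txt files in split_data simply list one image file name per line (our assumption; check the actual files), the clean/noisy split in steps 1-3 of either section could be scripted like this:

import shutil
from pathlib import Path

def split_by_list(src_dir, dst_dir, list_file):
    """Copy the files named in list_file (one name per line) from src_dir into dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for name in Path(list_file).read_text().splitlines():
        name = name.strip()
        if name:
            shutil.copy(Path(src_dir) / name, dst / name)

# Illustrative list-file names; use the actual .txt files shipped in split_data.
split_by_list("./SIDD_Medium", "./dataset/SIDD_C", "./split_data/sidd_clean.txt")
split_by_list("./SIDD_Medium", "./dataset/SIDD_N", "./split_data/sidd_noisy.txt")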

To evaluate the model

Synthetic Noise (AWGN)

  1. Download the CBSD68 dataset for evaluation here.
  2. Place the dataset in the ./dataset/test directory as follows:
dataset
├─── train
└─── test
     ├─── CBSD68
     └─── SIDD_test
  3. After that, run one of these commands:
sh ./scripts/test_awgn_sigma15.sh # AWGN with a noise level = 15
sh ./scripts/test_awgn_sigma25.sh # AWGN with a noise level = 25
sh ./scripts/test_awgn_sigma50.sh # AWGN with a noise level = 50
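
Denoising quality on CBSD68 is conventionally reported as PSNR. As a reference for interpreting the numbers the test scripts print, PSNR = 10 * log10(MAX^2 / MSE); a minimal implementation (ours, not necessarily the repository's) is:

import numpy as np

def psnr(denoised, clean, max_val=1.0):
    """PSNR in dB between two images with values in [0, max_val]."""
    mse = np.mean((denoised.astype(np.float64) - clean.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)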

Real-World Noise

  1. Download the SIDD test dataset for evaluation here.
  2. Place the dataset in the ./dataset/test directory as follows:
dataset
├─── train
└─── test
     ├─── CBSD68
     └─── SIDD_test
  3. After that, run this command:
sh ./scripts/test_real.sh

Pre-trained model

We provide pre-trained models in the ./checkpoints directory.

checkpoints
|   AWGN_sigma15.pth # pre-trained model (AWGN with a noise level = 15)
|   AWGN_sigma25.pth # pre-trained model (AWGN with a noise level = 25)
|   AWGN_sigma50.pth # pre-trained model (AWGN with a noise level = 50)
|   SIDD.pth # pre-trained model (Real-World noise)
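
A rough sketch of using one of these checkpoints for inference follows. The class name Generator, its import path, and the checkpoint layout are assumptions on our part; consult the test scripts for the actual entry point.

import torch
from model import Generator  # hypothetical import; see this repository's code

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
netG = Generator().to(device)
# If the .pth stores a full training state rather than bare weights,
# index into it (e.g. state["netG"]) before calling load_state_dict.
netG.load_state_dict(torch.load("./checkpoints/AWGN_sigma15.pth", map_location=device))
netG.eval()

with torch.no_grad():
    noisy = torch.rand(1, 3, 256, 256, device=device)  # stand-in for a real noisy image
    denoised = netG(noisy)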

Acknowledgements

This code is built on U-GAT-IT, CARN, and SSD-GAN. We thank the authors for sharing their code.

Contact

If you have any questions, feel free to contact me ([email protected]).
