Official implementation for "Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation".

Overview

Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation (NeurIPS 2021)

by Qiming Hu, Xiaojie Guo.

Dependencies

  • Python3
  • PyTorch>=1.0
  • OpenCV-Python, TensorboardX, Visdom
  • NVIDIA GPU+CUDA
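A minimal environment can typically be set up with pip (the package names below are the standard PyPI ones; pick the PyTorch build that matches your CUDA version):

pip install torch opencv-python tensorboardX visdom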

Network Architecture

[Figure: network architecture]

🚀 1. Single Image Reflection Separation

Data Preparation

Training dataset

  • 7,643 images from the Pascal VOC dataset, center-cropped into 224 x 224 patches to synthesize training pairs (a synthesis sketch follows this list).
  • 90 real-world training pairs provided by Zhang et al.
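Synthetic mixtures are commonly built by blending a clean transmission image with a blurred reflection layer, in the spirit of Zhang et al.; the snippet below is only a rough sketch of such a pipeline, and the blur strength, blending weight, and function name are illustrative rather than the exact recipe used in this repo:

import cv2
import numpy as np

def synthesize_mixture(t_img, r_img, sigma=3.0, alpha=0.8):
    """Blend a transmission image with a blurred reflection layer.

    t_img, r_img: float32 RGB arrays in [0, 1], already cropped to 224 x 224.
    sigma and alpha are illustrative values, not the authors' settings.
    """
    r_blur = cv2.GaussianBlur(r_img, (0, 0), sigmaX=sigma)  # defocused reflection
    mixture = np.clip(t_img + alpha * r_blur, 0.0, 1.0)     # simple additive model
    return mixture, t_img, r_blur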

Testing dataset

  • 45 real-world testing images from the CEILNet dataset.
  • 20 real testing pairs provided by Zhang et al.
  • 454 real testing pairs from SIR^2 dataset, containing three subsets (i.e., Objects (200), Postcard (199), Wild (55)).

Usage

Training

  • For stage 1: python train_sirs.py --inet ytmt_ucs --model ytmt_model_sirs --name ytmt_ucs_sirs --hyper --if_align
  • For stage 2: python train_twostage_sirs.py --inet ytmt_ucs --model twostage_ytmt_model --name ytmt_uct_sirs --hyper --if_align --resume --resume_epoch xx --checkpoints_dir xxx
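In stage 2, --resume, --resume_epoch, and --checkpoints_dir point the trainer at a stage-1 checkpoint (xx and xxx are placeholders for your own epoch number and checkpoint directory). Conceptually, resuming boils down to restoring the saved network weights; a minimal sketch, assuming the checkpoint stores the separation network under an 'icnn' key as suggested by the traceback quoted in the Comments section:

import torch

def resume_weights(net, ckpt_path, device='cuda'):
    # Load the checkpoint and restore the separation network's parameters.
    # The 'icnn' key is an assumption based on the error log below, not a
    # documented API of this repo.
    state = torch.load(ckpt_path, map_location=device)
    net.load_state_dict(state['icnn'])
    return net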

Testing

python test_sirs.py --inet ytmt_ucs --model twostage_ytmt_model --name ytmt_uct_sirs_test --hyper --if_align --resume --icnn_path ./checkpoints/ytmt_uct_sirs/twostage_unet_68_077_00595364.pt
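--if_align presumably trims each test image so its height and width suit the encoder-decoder's downsampling; a minimal sketch, assuming alignment to a multiple of 32 (both the multiple and the crop-from-origin behaviour are assumptions for illustration):

import numpy as np

def align_size(img, base=32):
    # Crop an H x W x C NumPy array so both spatial sides are divisible by `base`.
    h, w = img.shape[:2]
    return img[:(h // base) * base, :(w // base) * base]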

Trained weights

Google Drive

Visual comparison on real20 and SIR^2

[Figure: visual comparison on real20 and SIR^2]

Visual comparison on real45

[Figure: visual comparison on real45]

🚀 2. Single Image Denoising

Data Preparation

Training datasets

400 images from the Berkeley segmentation dataset, following DnCNN.

Testing datasets

The BSD68 dataset and Set12.

Usage

Training

python train_denoising.py --inet ytmt_pas --name ytmt_pas_denoising --preprocess True --num_of_layers 9 --mode B
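--mode B follows DnCNN's blind-noise training, where each sample is corrupted with Gaussian noise of a randomly drawn level rather than a fixed sigma; a sketch of that corruption step, assuming the usual DnCNN-B range of [0, 55] (the range is taken from DnCNN's convention, not from this repo's options):

import torch

def add_blind_gaussian_noise(clean, sigma_range=(0.0, 55.0)):
    # clean: N x C x H x W tensor in [0, 1]; draw one noise level per sample.
    sigma = torch.empty(clean.size(0), 1, 1, 1, device=clean.device).uniform_(*sigma_range) / 255.0
    return (clean + sigma * torch.randn_like(clean)).clamp(0.0, 1.0)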

Testing

python test_denoising.py --inet ytmt_pas --name ytmt_pas_denoising_blindtest_25 --test_noiseL 25 --num_of_layers 9 --test_data Set68 --icnn_path ./checkpoints/ytmt_pas_denoising_49_157500.pt
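--test_noiseL 25 corrupts each clean test image with Gaussian noise of sigma 25 (in [0, 255] units) before denoising, and results are usually reported as PSNR against the clean image; a small scoring sketch, where model and clean are placeholders for your network and a 1 x C x H x W tensor in [0, 1]:

import torch

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio in dB.
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

# noisy = (clean + 25.0 / 255.0 * torch.randn_like(clean)).clamp(0, 1)
# print(psnr(model(noisy), clean).item())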

Trained weights

Google Drive

Visual comparison on a sample from BSD68

[Figure: denoising comparison on a BSD68 sample]

🚀 3. Single Image Demoireing

Data Preparation

Training dataset

AIM 2019 Demoireing Challenge

Testing dataset

100 moiré and clean image pairs from the AIM 2019 Demoireing Challenge.

Usage

Training

python train_demoire.py --inet ytmt_ucs --model ytmt_model_demoire --name ytmt_uas_demoire --hyper --if_align

Testing

python test_demoire.py --inet ytmt_ucs --model ytmt_model_demoire --name ytmt_uas_demoire_test --hyper --if_align --resume --icnn_path ./checkpoints/ytmt_ucs_demoire/ytmt_ucs_opt_086_00860000.pt

Trained weights

Google Drive

Visual comparison on the validation set of LCDMoire

[Figure: demoireing comparison on the LCDMoire validation set]


Comments
  • Datasets

    Hi,

    I have been trying to experiment with the model, but I'm having trouble finding the correct datasets for testing. The SIR^2 dataset at the provided link doesn't have the images set up with the naming conventions used in the script. Could you please direct me to the correct datasets for testing and training? Is there a separate repository that you have used?

    Thanks so much,

    David

    opened by davidgaddie 3
  • About Training Details

    Hello, thank you for sharing your wonderful work. I have some questions about the training details. The paper says training runs for 120 epochs, but the epoch count is set to 60 in YTMT-Strategy/options/net_options/train_options.py. Moreover, the best model in the paper is YTMT-UCT, which needs two-stage training. Could you provide the training settings for YTMT-UCT (epochs, batch size, ...)? Looking forward to your reply!

    opened by DUT-CSJ 2
  • CUDA vram allocation issue

    Hi,

    I've been trying to run the reflection test code, but I get this error: RuntimeError: CUDA out of memory. Tried to allocate 15.66 GiB (GPU 0; 22.20 GiB total capacity; 16.09 GiB already allocated; 2.68 GiB free; 17.55 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

    I'm running on an A10G GPU on AWS. I suspect that maybe the dataset is incorrect as each image in the dataset I have is around 800MB. If that's the case can I please be directed to the correct repository for the read20_420 images?

    Thanks so much,

    David

    opened by davidgaddie 1
  • test demoire error

    Thanks for your great work, but I get an error when I run: python test_demoire.py --inet ytmt_ucs --model ytmt_model_demoire --name ytmt_uas_demoire_test --hyper --if_align --resume --icnn_path checkpoints/ytmt_ucs_demoire/ytmt_ucs_demoire_opt_086_00860000.pt

    -------------- End ---------------- [i] initialization method [edsr] Traceback (most recent call last): File "test_demoire.py", line 28, in engine = Engine(opt) File "/nfs_data/code/YTMT-Strategy-main/engine.py", line 19, in init self.__setup() File "/nfs_data/code/YTMT-Strategy-main/engine.py", line 29, in __setup self.model.initialize(opt) File "/nfs_data/code/YTMT-Strategy-main/models/ytmt_model_demoire.py", line 242, in initialize self.load(self, opt.resume_epoch) File "/nfs_data/code/YTMT-Strategy-main/models/ytmt_model_demoire.py", line 413, in load model.net_i.load_state_dict(state_dict['icnn']) File "/opt/conda/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1223, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for YTMT_US: Missing key(s) in state_dict: "inc.ytmt_head.fusion_l.weight", "inc.ytmt_head.fusion_l.bias", "inc.ytmt_head.fusion_r.weight", "inc.ytmt_head.fusion_r.bias", "down1.model.ytmt_head.fusion_l.weight", "down1.model.ytmt_head.fusion_l.bias", "down1.model.ytmt_head.fusion_r.weight", "down1.model.ytmt_head.fusion_r.bias", "down2.model.ytmt_head.fusion_l.weight", "down2.model.ytmt_head.fusion_l.bias", "down2.model.ytmt_head.fusion_r.weight", "down2.model.ytmt_head.fusion_r.bias", "down3.model.ytmt_head.fusion_l.weight", "down3.model.ytmt_head.fusion_l.bias", "down3.model.ytmt_head.fusion_r.weight", "down3.model.ytmt_head.fusion_r.bias", "down4.model.ytmt_head.fusion_l.weight", "down4.model.ytmt_head.fusion_l.bias", "down4.model.ytmt_head.fusion_r.weight", "down4.model.ytmt_head.fusion_r.bias", "up1.model.ytmt_head.fusion_l.weight", "up1.model.ytmt_head.fusion_l.bias", "up1.model.ytmt_head.fusion_r.weight", "up1.model.ytmt_head.fusion_r.bias", "up2.model.ytmt_head.fusion_l.weight", "up2.model.ytmt_head.fusion_l.bias", "up2.model.ytmt_head.fusion_r.weight", "up2.model.ytmt_head.fusion_r.bias", "up3.model.ytmt_head.fusion_l.weight", "up3.model.ytmt_head.fusion_l.bias", "up3.model.ytmt_head.fusion_r.weight", "up3.model.ytmt_head.fusion_r.bias", "up4.model.ytmt_head.fusion_l.weight", "up4.model.ytmt_head.fusion_l.bias", "up4.model.ytmt_head.fusion_r.weight", "up4.model.ytmt_head.fusion_r.bias".

    opened by zdyshine 1