Official PyTorch code for Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow, ICCV2021)

Overview

Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow, ICCV2021)

This repository is the official PyTorch implementation of Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (arxiv, supp).

Normalizing flows have recently demonstrated promising results for low-level vision tasks. For image super-resolution (SR), a flow learns to predict diverse photo-realistic high-resolution (HR) images from the low-resolution (LR) image rather than learning a deterministic mapping. For image rescaling, flows achieve high accuracy by jointly modelling the downscaling and upscaling processes. While existing approaches employ specialized techniques for these two tasks, we set out to unify them in a single formulation. In this paper, we propose the hierarchical conditional flow (HCFlow) as a unified framework for image SR and image rescaling. More specifically, HCFlow learns a bijective mapping between HR and LR image pairs by modelling the distribution of the LR image and the remaining high-frequency component simultaneously. In particular, the high-frequency component is conditioned on the LR image in a hierarchical manner. To further enhance the performance, other losses such as the perceptual loss and GAN loss are combined with the commonly used negative log-likelihood loss during training. Extensive experiments on general image SR, face image SR and image rescaling demonstrate that the proposed HCFlow achieves state-of-the-art performance in terms of both quantitative metrics and visual quality.
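
As a rough sketch of the formulation described above (the notation x_hr, x_lr, a and f_theta is illustrative, not the paper's exact notation): the bijective mapping sends the HR image to the LR image plus hierarchical high-frequency latents, and the negative log-likelihood follows from the standard change-of-variables formula:

\begin{aligned}
(x_{\mathrm{lr}},\, a) &= f_\theta(x_{\mathrm{hr}}) && \text{(bijective mapping)} \\
p_\theta(x_{\mathrm{hr}}) &= p_\theta(x_{\mathrm{lr}})\; p_\theta(a \mid x_{\mathrm{lr}})\;
  \left|\det \frac{\partial f_\theta(x_{\mathrm{hr}})}{\partial x_{\mathrm{hr}}}\right| \\
\mathcal{L}_{\mathrm{nll}} &= -\log p_\theta(x_{\mathrm{lr}})
  - \log p_\theta(a \mid x_{\mathrm{lr}})
  - \log \left|\det \frac{\partial f_\theta(x_{\mathrm{hr}})}{\partial x_{\mathrm{hr}}}\right|
\end{aligned}

The perceptual and GAN losses mentioned above are then combined with this NLL term during training.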


Requirements

  • Python 3.7, PyTorch == 1.7.1
  • Requirements: opencv-python, lpips, natsort, etc.
  • Platforms: Ubuntu 16.04, CUDA 11.0
cd HCFlow-master
pip install -r requirements.txt 
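
As a quick sanity check of the environment (an optional one-liner, not part of the repository's scripts), you can verify the installed PyTorch version and CUDA availability:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"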

Quick Run (takes 1 minute)

To run the code with one command (without preparing data), run one of the following:

cd codes
# face image SR
python test_HCFlow.py --opt options/test/test_SR_CelebA_8X_HCFlow.yml

# general image SR
python test_HCFlow.py --opt options/test/test_SR_DF2K_4X_HCFlow.yml

# image rescaling
python test_HCFlow.py --opt options/test/test_Rescaling_DF2K_4X_HCFlow.yml

Data Preparation

The framework of this project is based on MMSR and SRFlow. To prepare data, put the training and testing sets in ./datasets, e.g. ./datasets/DIV2K/HR/0801.png. Commonly used SR datasets can be downloaded here. There are two ways to accelerate data loading: first, use ./scripts/png2npy.py to generate .npy files and use data/GTLQnpy_dataset.py; second (recommended), use the .pklv4 dataset format with data/LRHR_PKL_dataset.py. Please refer to SRFlow for more details. Prepared datasets can be downloaded here.
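
For reference, the snippet below is a minimal, self-contained sketch of the first option; it is not the repository's ./scripts/png2npy.py, and the folder layout, the output directory name and the uint8 HWC storage are assumptions:

import glob
import os

import cv2
import numpy as np

src_dir = './datasets/DIV2K/HR'        # folder of .png images (assumed layout)
dst_dir = './datasets/DIV2K/HR_npy'    # output folder for .npy files (assumed name)
os.makedirs(dst_dir, exist_ok=True)

for png_path in sorted(glob.glob(os.path.join(src_dir, '*.png'))):
    # cv2.imread returns an HxWxC uint8 array (BGR order); it is saved unchanged
    # so the dataset loader decides on any colour conversion or normalisation.
    img = cv2.imread(png_path, cv2.IMREAD_UNCHANGED)
    name = os.path.splitext(os.path.basename(png_path))[0]
    np.save(os.path.join(dst_dir, name + '.npy'), img)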

Training

To train HCFlow for general image SR, face image SR, or image rescaling, run one of the following:

cd codes

# face image SR
python train_HCFlow.py --opt options/train/train_SR_CelebA_8X_HCFlow.yml

# general image SR
python train_HCFlow.py --opt options/train/train_SR_DF2K_4X_HCFlow.yml

# image rescaling
python train_HCFlow.py --opt options/train/train_Rescaling_DF2K_4X_HCFlow.yml

All trained models can be downloaded from here.

Testing

Please follow the Quick Run section; just modify the dataset paths in the corresponding test_*_HCFlow.yml file.
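
If you want to check which dataset paths a test option file points to before running, the hedged snippet below loads it with PyYAML and prints the entries; the key names (datasets, dataroot_GT, dataroot_LQ) follow the usual MMSR-style option layout and are assumptions about this repository's configs:

import yaml  # PyYAML

opt_path = 'options/test/test_SR_DF2K_4X_HCFlow.yml'
with open(opt_path, 'r') as f:
    opt = yaml.safe_load(f)

# Print every dataset section and its (assumed) MMSR-style path keys.
for name, dataset in (opt.get('datasets') or {}).items():
    print(name, dataset.get('dataroot_GT'), dataset.get('dataroot_LQ'))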

Results

We achieved state-of-the-art performance on general image SR, face image SR and image rescaling.

For more results, please refer to the paper and supplementary material.

Citation

@inproceedings{liang21hcflow,
  title={Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling},
  author={Liang, Jingyun and Lugmayr, Andreas and Zhang, Kai and Danelljan, Martin and Van Gool, Luc and Timofte, Radu},
  booktitle={IEEE International Conference on Computer Vision},
  year={2021}
}

License & Acknowledgement

This project is released under the Apache 2.0 license. The codes are based on MMSR, SRFlow, IRN and Glow-pytorch. Please also follow their licenses. Thanks for their great work.

Comments
  • Testing without GT

    Is there a way to run the test without GT? I just want to run inference with the model. I found a mode called LQ which, I think, should only load the images in the LR directory. But this mode gives me the following error at LQ_dataset.py, line 88:

    assert real_crop * self.opt['scale'] * 2 > self.opt['kernel_size']
    TypeError: '>' not supported between instances of 'int' and 'NoneType'

    solved ✅ 
    opened by AhmedHashish123 4
  • Add Docker environment & web demo

    Hey @JingyunLiang !👋

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can try out your model! View it here: https://replicate.ai/jingyunliang/hcflow-sr, which currently supports Image Super-Resolution.

    Claim your page here so you can edit it, and we'll feature it on our website and tweet about it too.

    In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊

    opened by chenxwh 2
  • The code implementation and the paper description seem different

    Hi, your work is excellent, but there is one thing I don't understand.

    What is written in the paper is:

    "A diagonal covariance matrix with all diagonal elements close to zero"

    But the code implementation in HCFlowNet_SR_arch.py, line 64, is: Basic.GaussianDiag.logp(lr, -torch.ones_like(lr)*6, fake_lr_from_hr)

    Why use -torch.ones_like(lr)*6 as the covariance matrix? This seems to be inconsistent with the description in the paper.

    opened by xmyhhh 2
  • environment

    ImportError: /home/hbw/gcc-build-5.4.0/lib64/libstdc++.so.6: version `GLIBCXX_3.4.22' not found (required by /home/hbw/anaconda3/lib/python3.8/site-packages/scipy/fft/_pocketfft/pypocketfft.cpython-38-x86_64-linux-gnu.so)

    Is this error due to my GCC version being too low? Which version do you use? Looking forward to your reply!

    opened by hbw945 2
  • Code versions of BRISQUE and NIQE used in paper

    Hi, I have run performance tests with the Matlab versions of the NIQE and BRISQUE codes and found deviations from the values reported in the paper. Could you please provide a link to the code you used? thanks a lot~

    solved ✅ 
    opened by xmyhhh 1
  • Update on Replicate demo

    Hello again @JingyunLiang :),

    This pull request does a few little things:

    • Updated the demo link with an icon in README as you suggested
    • A bugfix for cleaning temporary directory on cog

    We have added more functionality to the Example page of your model; as the owner of the page, you can now add and delete examples to customise the gallery as you like.

    Also, you can run cog push if you'd like to update this model or any other models on Replicate in the future 😄

    opened by chenxwh 1
  • About training and inference time?

    Thanks for your nice work!

    I would like to know how much time you need for training and for inference with your models.

    Furthermore, will information about params / FLOPs be reported?

    Thanks.

    solved ✅ 
    opened by TiankaiHang 1
  • RuntimeError: The size of tensor a (20) must match the size of tensor b (40) at non-singleton dimension 3

    Hi, I encountered this error when training HCFlowNet. I converted my .png dataset to the .pklv4 format and trained on Windows 10 with a single GPU. Could you please help me find the error? Thanks a lot.

    opened by William9Baker 0
  • How to build an invertible mapping between two variables whose dimensions are different?

    Maybe this is a stupid question, but I have been puzzled for quite a long time. In the image super-resolution task, the input and output have different dimensions. How can one build an invertible mapping between them? I notice that you calculate the determinant of the Jacobian, so I assumed the mapping here is strictly invertible?

    opened by Wangbk-dl 0
  • How to make an invertible mapping between two variables whose dimensions are different?

    Maybe this is a stupid question, but I have been puzzled for quite a long time. In the image super-resolution task, the input and output have different dimensions. How can one build such an invertible mapping between them? For example: if I have a low-resolution (LR) image x and an invertible function G, I can feed x into G and generate an HR image y. But can you ensure that we obtain an output identical to x when we feed y into G_inverse?

    y = G(x)
    x' = G_inverse(y) =? x

    I would appreciate it if you could offer some help.

    opened by Wangbk-dl 0
  • New Super-Resolution Benchmarks

    Hello,

    MSU Graphics & Media Lab Video Group has recently launched two new Super-Resolution Benchmarks.

    If you are interested in participating, you can add your algorithm following the submission steps:

    We would be grateful for your feedback on our work!

    opened by EvgeneyBogatyrev 0
  • Why NLL is negative during the training?

    Great work! During the training process, we found that the output NLL is negative. But theoretically, NLL should be positive. Is there any explanation for this?

    opened by IMSEMZPZ 0
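
A note on the last two questions above (the -torch.ones_like(lr)*6 "covariance" and the negative NLL): in Glow/SRFlow-style codebases, a GaussianDiag.logp helper usually takes the log standard deviation as its second argument, so -6 corresponds to sigma = exp(-6) ≈ 0.0025 and variance exp(-12) ≈ 6e-6, i.e. a diagonal covariance with entries close to zero as described in the paper. And since this is a continuous density, the per-pixel log-density can exceed zero when sigma is that small, so a negative NLL during training is expected rather than a bug. The sketch below (my own re-implementation under the log-std assumption, not the repository's code) illustrates both points:

import math
import torch

def gaussian_diag_logp(mean, log_std, x):
    # Log-density of a diagonal Gaussian, summed over all dimensions.
    # Parameterised by the log standard deviation (the convention used in most
    # Glow-style codebases); this is an illustrative re-implementation.
    return torch.sum(
        -0.5 * (math.log(2 * math.pi) + 2.0 * log_std
                + (x - mean) ** 2 / torch.exp(2.0 * log_std))
    )

lr = torch.rand(1, 3, 8, 8)                  # a dummy "LR image" in [0, 1]
log_std = -torch.ones_like(lr) * 6           # sigma = exp(-6) ≈ 0.0025
print(torch.exp(2 * log_std)[0, 0, 0, 0])    # variance = exp(-12) ≈ 6e-6, i.e. "close to zero"

# When the prediction is close to the target, each pixel contributes roughly
# log(1 / (sigma * sqrt(2*pi))) ≈ +5.1 nats, so log p > 0 and NLL = -log p < 0.
fake_lr = lr + 0.001 * torch.randn_like(lr)
print(gaussian_diag_logp(lr, log_std, fake_lr))  # large positive log-density
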
Owner
Jingyun Liang
PhD Student at Computer Vision Lab, ETH Zurich