Implementation of Common Image Evaluation Metrics by Sayed Nadim (sayednadim.github.io).

Overview

Image Quality Evaluation Metrics

Implementation of some common full-reference image quality metrics. The repo is built around full-reference image quality metrics such as L1, L2, PSNR, SSIM, and LPIPS, and feature-level quality metrics such as FID and IS. It can be used for evaluating image denoising, colorization, inpainting, deraining, dehazing, etc., where we have access to ground truth.

The goal of this repo is to provide a common evaluation script for image evaluation tasks. It contains some commonly used image quality metrics (e.g., L1, L2, SSIM, PSNR, LPIPS, FID, IS).

Pull requests and corrections/suggestions will be cordially appreciated.

Note: the Inception Score implementation is currently not correct; I will check and confirm. The other metrics are OK!

Please Note

  • Images are scaled to [0, 1]. If you need a different data range, please make sure to change the data range in SSIM and PSNR accordingly (see the sketch after this list).
  • The number of generated images and ground-truth images has to be exactly the same.
  • I have resized the images to (256, 256). You can change the resolution based on your needs.
  • Please make sure that all the images (generated and ground_truth images) are in the corresponding folders.
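As a rough illustration of the data-range point above, here is a minimal sketch, assuming images are already loaded as [0, 1] tensors, of computing SSIM and PSNR with PIQ. This is not the repo's own evaluation code, only an example of the data_range argument.

    # Minimal sketch (assumed, not the repo's script): SSIM/PSNR on [0, 1] images with PIQ.
    import torch
    import piq

    # Dummy batches standing in for generated and ground-truth images,
    # already scaled to [0, 1] and resized to 256x256 (N, C, H, W).
    generated = torch.rand(4, 3, 256, 256)
    ground_truth = torch.rand(4, 3, 256, 256)

    # data_range must match the image scale; change it (e.g., to 255.0) if your images are not in [0, 1].
    ssim_value = piq.ssim(generated, ground_truth, data_range=1.0)
    psnr_value = piq.psnr(generated, ground_truth, data_range=1.0)

    print(f"SSIM: {ssim_value:.4f}, PSNR: {psnr_value:.4f} dB")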

Requirements
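This README does not reproduce the requirements list itself. Judging from the rest of the document (PyTorch-based metrics via PIQ, a YAML config, CSV output), a plausible environment is Python 3 with roughly the packages below; treat this as an assumption and check the repository's own requirements file.

    # Assumed dependencies -- not confirmed by this README; verify against the repo.
    torch
    torchvision
    piq
    pyyaml
    numpy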

How to use

  • Edit config.yaml as per your needs (see the options under Usage below).
  • Run main.py (e.g., python main.py).

Usage

  • Options in the config.yaml file (an illustrative config is sketched after this list):
    • dataset_name - Name of the dataset (e.g., Places, DIV2K); used for saving the dataset name in the CSV file. Default - Places.
    • dataset_with_subfolders - Set to True if your dataset has sub-folders containing images. Default - False
    • multiple_evaluation - Whether you want sequential evaluation or single evaluation. Please refer to the folder structure below for this.
    • dataset_format - Whether you are providing file lists (flists) or just paths to the image folders. Default - image.
    • model_name - Name of the model. Used for saving metric values in the CSV. Default - Own.
    • generated_image_path - Path to your generated images.
    • ground_truth_image_path - Path to your ground truth images.
    • batch_size - Batch size you want to use. Default - 4.
    • image_shape - Shape of the image. Both generated and ground-truth images will be resized to this shape. Default - [256, 256, 3].
    • threads - Threads to be used for multi-processing. Default - 4.
    • random_crop - If you want randomly cropped images instead of resized ones. Currently not implemented.
    • save_results - If you want to save the results in CSV files. Saved to the results folder. Default - True.
    • save_type - csv or npz. npz is not implemented yet.
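For illustration, a config.yaml using the options above might look like the following. The key names come from the list above; the values (paths, dataset and model names) are placeholders, not taken from the repository.

    # Illustrative config.yaml -- keys from the options list above, values are placeholders.
    dataset_name: Places
    dataset_with_subfolders: False
    multiple_evaluation: False
    dataset_format: image
    model_name: Own
    generated_image_path: ./data/generated
    ground_truth_image_path: ./data/gt
    batch_size: 4
    image_shape: [256, 256, 3]
    threads: 4
    random_crop: False
    save_results: True
    save_type: csv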

Single or multiple evaluation

    For single evaluation, I assumed the file system like this:

        # ================= Single structure ===================#

    |- root
    |   |- image_1
    |   |- image_2
    |   |- .....
    |- gt
    |   |- image_1
    |   |- image_2
    |   |- .....

    For multiple_evaluation, I assumed the file system like this (a small iteration sketch follows these structures):

        # ================= structure 1 ===================#

    |- root
    |   |- file_10_20
    |   |   |- image_1
    |   |   |- image_2
    |   |   |- .....
    |   |- file_20_30
    |   |   |- image_1
    |   |   |- image_2
    |   |   |- .....
    |- gt
    |   |- image_1
    |   |- image_2
    |   |- .....

    or a nested structure like this:

        # ================= structure 2 ===================#

    |- root
    |   |- 01_cond
    |   |   |- cond_10_20
    |   |   |   |- image_1
    |   |   |   |- image_2
    |   |   |   |- .....
    |   |- 02_cond
    |   |   |- cond_10_20
    |   |   |   |- image_1
    |   |   |   |- image_2
    |   |   |   |- .....
    |- gt
    |   |- image_1
    |   |- image_2
    |   |- .....
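To make the multiple_evaluation layout concrete, here is a minimal sketch, assuming structure 1 above, of how one might walk the generated subfolders and compare each against the single gt folder. It is an illustration, not the repo's actual evaluation loop, and evaluate_pair is a hypothetical placeholder.

    # Minimal sketch (assumed): pair each generated subfolder with the shared gt folder.
    import os

    root = "root"  # folder containing subfolders of generated images (e.g., file_10_20, file_20_30)
    gt = "gt"      # single folder of ground-truth images

    for subfolder in sorted(os.listdir(root)):
        generated_dir = os.path.join(root, subfolder)
        if not os.path.isdir(generated_dir):
            continue
        # The repo's metrics would run here for this pair, e.g.:
        # evaluate_pair(generated_dir, gt)  # hypothetical helper, not from the repo
        print(f"Evaluating {generated_dir} against {gt}")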

To-do metrics

  • L1
  • L2
  • SSIM
  • PSNR
  • LPIPS
  • FID
  • IS

To-do tasks

  • implementation of the framework
  • primary check for errors
  • Sequential evaluation (i.e., folder1, folder2, folder3, ... vs ground_truth; useful for denoising, inpainting, etc.)
  • unittest

Acknowledgement

Thanks to the PhotoSynthesis Team for the wonderful implementation of the metrics (PIQ). Please cite them accordingly if you use PIQ for evaluation.

Cheers!!
