FID calculation with proper image resizing and quantization steps

Overview

clean-fid: Fixing Inconsistencies in FID


Project | Paper

The FID calculation involves many steps that can produce inconsistencies in the final metric. As shown below, different implementations use different low-level image quantization and resizing functions, the latter of which are often implemented incorrectly.
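As a small illustration of the quantization issue, the sketch below (our example, not part of the library) shows that two common float-to-uint8 conversions of the same image already disagree on roughly half of all pixels:

    import numpy as np

    # Two common float -> uint8 quantizations of the same image: a plain cast
    # truncates, while rounding first shifts about half of the values by one level.
    rng = np.random.default_rng(0)
    x = rng.random((256, 256)) * 255.0

    truncated = x.astype(np.uint8)          # truncation (plain cast)
    rounded = np.round(x).astype(np.uint8)  # round, then cast

    print((truncated != rounded).mean())    # fraction of differing pixels, ~0.5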

We provide an easy-to-use library to address the above issues and make the FID scores comparable across different methods, papers, and groups.

FID Steps


On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation
Gaurav Parmar, Richard Zhang, Jun-Yan Zhu
arXiv 2104.11222, 2021
CMU and Adobe



Buggy Resizing Operations

The definitions of resizing functions are mathematical and should never depend on the library being used. Unfortunately, implementations differ across commonly-used libraries, and several popular ones are simply incorrect.


These inconsistencies among implementations can have a drastic effect on the evaluation metrics. The table below shows that FFHQ dataset images resized with the bicubic implementations of other libraries (OpenCV, PyTorch, TensorFlow) have a large FID score (≥ 6) when compared to the same images resized with the correctly implemented PIL bicubic filter. Other correctly implemented filters from PIL (Lanczos, bilinear, box) all result in a relatively small FID score (≤ 0.75).
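The discrepancy is easy to see directly. The following sketch (our illustration, assuming Pillow and OpenCV are installed) downsamples the same high-frequency image with PIL's bicubic filter, which antialiases correctly, and OpenCV's bicubic filter, which does not:

    import numpy as np
    import cv2
    from PIL import Image

    # High-frequency noise image: aliasing makes the filter difference obvious.
    rng = np.random.default_rng(0)
    img = (rng.random((1024, 1024, 3)) * 255).astype(np.uint8)

    # PIL applies an antialiasing prefilter when downsampling.
    pil_out = np.asarray(Image.fromarray(img).resize((256, 256), Image.BICUBIC))

    # OpenCV's INTER_CUBIC interpolates without an antialiasing prefilter.
    cv2_out = cv2.resize(img, (256, 256), interpolation=cv2.INTER_CUBIC)

    # Mean absolute pixel difference between the two "bicubic" results.
    print(np.abs(pil_out.astype(int) - cv2_out.astype(int)).mean())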

JPEG Image Compression

Image compression can have a surprisingly large effect on FID: two images can be perceptually indistinguishable from each other and still yield a large score. The FID scores are calculated between all FFHQ images saved in the corresponding JPEG format and the same images saved as PNG.

Below, we study the effect of JPEG compression for StyleGAN2 models trained on the FFHQ dataset (left) and the LSUN Outdoor Church dataset (right). Note that the LSUN dataset images were collected with JPEG compression (quality 75), whereas FFHQ images were collected as PNG. Interestingly, for the LSUN dataset, the best FID score (3.48) is obtained when the generated images are compressed with JPEG quality 87.
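This kind of comparison can be reproduced with the library itself. A sketch along the following lines should work (the folder names are hypothetical; fid.compute_fid between two folders is the API shown in the Quick Start below):

    import os
    from PIL import Image
    from cleanfid import fid

    # Hypothetical paths: a folder of PNG images and a scratch folder for JPEG copies.
    png_dir, jpeg_dir = "ffhq_png", "ffhq_jpeg75"
    os.makedirs(jpeg_dir, exist_ok=True)

    # Re-save every PNG as JPEG at quality 75 (the quality used to collect LSUN).
    for name in os.listdir(png_dir):
        im = Image.open(os.path.join(png_dir, name)).convert("RGB")
        im.save(os.path.join(jpeg_dir, name.rsplit(".", 1)[0] + ".jpg"), quality=75)

    # FID between the originals and their JPEG copies isolates the compression effect.
    print(fid.compute_fid(png_dir, jpeg_dir))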


Quick Start

  • Install requirements

    pip install -r requirements.txt
    
  • Install the library

    pip install clean-fid
    
  • Compute FID between two image folders

    from cleanfid import fid
    
    score = fid.compute_fid(fdir1, fdir2)
    
  • Compute FID between one folder of images and pre-computed dataset statistics (e.g., FFHQ)

    from cleanfid import fid
    
    score = fid.compute_fid(fdir1, dataset_name="FFHQ", dataset_res=1024)
    
    
  • Compute FID using a generative model and pre-computed dataset statistics (a concrete wrapper sketch follows this list):

    from cleanfid import fid
    
    # function that accepts a latent and returns an image in the range [0, 255]
    gen = lambda z: GAN(latent=z, ... , <other_flags>)
    
    score = fid.compute_fid(gen=gen, dataset_name="FFHQ",
            dataset_res=256, num_gen=50_000)
    
    
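For example, a hypothetical wrapper around a PyTorch generator G whose raw outputs lie in [-1, 1] could look like this (make_gen and G are illustrative names, not part of the library):

    import torch

    # `G` stands in for any generator producing (N, C, H, W) images in [-1, 1].
    def make_gen(G):
        def gen(z):
            with torch.no_grad():
                img = G(z)
            # Rescale from [-1, 1] to the [0, 255] range that compute_fid expects.
            return (img.clamp(-1, 1) + 1.0) * 127.5
        return gen

    # score = fid.compute_fid(gen=make_gen(G), dataset_name="FFHQ",
    #                         dataset_res=256, num_gen=50_000)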

Supported Precomputed Datasets

We provide precomputed statistics for the following configurations:

Task              Dataset                Resolution  Split                     Mode
Image Generation  FFHQ                   256, 1024   train+val                 clean, legacy_pytorch, legacy_tensorflow
Image Generation  LSUN Outdoor Churches  256         train                     clean, legacy_pytorch, legacy_tensorflow
Image to Image    horse2zebra            128, 256    train, test, train+test   clean, legacy_pytorch, legacy_tensorflow

Using precomputed statistics: to compute the FID score against the precomputed dataset statistics, use the corresponding options. For instance, to compute the clean-fid score on generated 256x256 FFHQ images, use the command:

fid_score = fid.compute_fid(fdir1, dataset_name="FFHQ", dataset_res=256, mode="clean")

Create Custom Dataset Statistics

  • dataset_path: folder where the dataset images are stored
  • Generate and save the inception statistics
    import numpy as np
    from cleanfid import fid
    dataset_path = ...
    feats = fid.get_folder_features(dataset_path, num=50_000)
    mu = np.mean(feats, axis=0)
    sigma = np.cov(feats, rowvar=False)
    np.savez_compressed("stats.npz", mu=mu, sigma=sigma)
    
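The saved statistics plug directly into the standard Fréchet distance. As a reference, here is a minimal sketch of that formula (our illustration using scipy; the helper name frechet_distance is ours, not a library function):

    import numpy as np
    from scipy import linalg

    def frechet_distance(mu1, sigma1, mu2, sigma2):
        # FID = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^(1/2))
        diff = mu1 - mu2
        covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
        covmean = covmean.real  # drop tiny imaginary parts from numerical error
        return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

    # ref = np.load("stats.npz")
    # score = frechet_distance(ref["mu"], ref["sigma"], mu_gen, sigma_gen)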

Backwards Compatibility

We provide two flags to reproduce the legacy FID score.

  • mode="legacy_pytorch"
    This flag is equivalent to using the popular PyTorch FID implementation provided here
    The difference between using clean-fid with this option and code is ~1.9e-06
    See doc for how the methods are compared

  • mode="legacy_tensorflow"
    This flag is equivalent to using the official implementation of FID released by the authors. To use this flag, you need to additionally install tensorflow. The tensorflow cuda version may cause issues with the pytorch code. I have tested this with TensorFlow-cpu 2.2 (`pip install tensorflow-cpu==2.2)
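To see how much the three protocols differ on your own images, one option (a sketch, reusing fdir1 from the Quick Start) is to sweep over the modes:

    from cleanfid import fid

    # Compute FID under the corrected protocol and both legacy protocols.
    for mode in ["clean", "legacy_pytorch", "legacy_tensorflow"]:
        score = fid.compute_fid(fdir1, dataset_name="FFHQ",
                                dataset_res=256, mode=mode)
        print(mode, score)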


CleanFID Leaderboard for common tasks


FFHQ @ 1024x1024

Model      Legacy-FID    Clean-FID
StyleGAN2  2.85 ± 0.05   3.08 ± 0.05
StyleGAN   4.44 ± 0.04   4.82 ± 0.04
MSG-GAN    6.09 ± 0.04   6.58 ± 0.06

Image-to-Image (horse → zebra @ 256x256), computed using test images

Model     Legacy-FID  Clean-FID
CycleGAN  77.20       75.17
CUT       45.51       43.71

Building from source

python setup.py bdist_wheel
pip install dist/*

Citation

If you find this repository useful for your research, please cite the following work.

@article{parmar2021cleanfid,
  title={On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation},
  author={Parmar, Gaurav and Zhang, Richard and Zhu, Jun-Yan},
  journal={arXiv preprint arXiv:2104.11222},
  year={2021}
}

Credits

PyTorch-StyleGAN2: code | License

PyTorch-FID: code | License

StyleGAN2: code | LICENSE

converted FFHQ weights: code | License
