Compute FID scores with PyTorch.

FID score for PyTorch

This is a port of the official implementation of Fréchet Inception Distance to PyTorch. See https://github.com/bioinf-jku/TTUR for the original implementation using Tensorflow.

FID is a measure of similarity between two datasets of images. It was shown to correlate well with human judgement of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network.
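
Concretely, the Fréchet distance between two Gaussians N(mu1, sigma1) and N(mu2, sigma2) is d^2 = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^(1/2)). A minimal NumPy/SciPy sketch of this formula (illustrative names, not this package's API; the real implementation also guards against singular covariances):

    import numpy as np
    from scipy import linalg

    def frechet_distance(mu1, sigma1, mu2, sigma2):
        # d^2 = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2))
        diff = mu1 - mu2
        covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
        if np.iscomplexobj(covmean):
            covmean = covmean.real  # drop tiny imaginary parts from numerical error
        return diff.dot(diff) + np.trace(sigma1 + sigma2 - 2.0 * covmean)

    # mu and sigma are estimated from the Inception activations of each dataset:
    # mu = act.mean(axis=0); sigma = np.cov(act, rowvar=False)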

Further insights and an independent evaluation of the FID score can be found in Are GANs Created Equal? A Large-Scale Study.

The weights and the model are exactly the same as in the official Tensorflow implementation, and were tested to give very similar results (e.g. 0.08 absolute error and 0.0009 relative error on LSUN, using ProGAN generated images). However, due to differences in the image interpolation implementation and library backends, FID results still differ slightly from the original implementation. So if you report FID scores in your paper, and you want them to be exactly comparable to FID scores reported in other papers, you should consider using the official Tensorflow implementation.

Installation

Install from pip:

pip install pytorch-fid

Requirements:

  • python3
  • pytorch
  • torchvision
  • pillow
  • numpy
  • scipy

Usage

To compute the FID score between two datasets, where images of each dataset are contained in an individual folder:

python -m pytorch_fid path/to/dataset1 path/to/dataset2

To run the evaluation on GPU, use the flag --device cuda:N, where N is the index of the GPU to use.
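
The same computation can also be run from Python. A hedged sketch (the exact signature of calculate_fid_given_paths has changed between versions, so check the version you have installed):

    from pytorch_fid.fid_score import calculate_fid_given_paths

    # Arguments mirror the CLI flags; the signature may differ across versions.
    fid_value = calculate_fid_given_paths(
        ['path/to/dataset1', 'path/to/dataset2'],
        batch_size=50,
        device='cuda:0',
        dims=2048,
    )
    print('FID:', fid_value)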

Using different layers for feature maps

Unlike the official implementation, you can choose to use a different feature layer of the Inception network instead of the default pool3 layer. Since lower-layer features still have spatial extent, they are first global-average-pooled to a vector before the mean and covariance are estimated.
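
Concretely, global average pooling just averages each feature map over its spatial dimensions, for example:

    import torch

    activations = torch.randn(8, 192, 35, 35)  # e.g. second max-pool features
    features = activations.mean(dim=(2, 3))    # global average pool -> (8, 192)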

This might be useful if the datasets you want to compare have fewer than the otherwise required 2048 images (the 2048×2048 covariance matrix can only be estimated reliably from at least 2048 samples). Note that this changes the magnitude of the FID score, so you cannot compare scores calculated at different dimensionalities. The resulting scores might also no longer correlate with visual quality.

You can select the dimensionality of features to use with the flag --dims N, where N is the dimensionality of features. The choices are:

  • 64: first max pooling features
  • 192: second max pooling features
  • 768: pre-aux classifier features
  • 2048: final average pooling features (this is the default)
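
For example, to compare two folders of fewer than 2048 images using the 768-dimensional features:

python -m pytorch_fid path/to/dataset1 path/to/dataset2 --dims 768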

Citing

If you use this repository in your research, consider citing it using the following BibTeX entry:

@misc{Seitzer2020FID,
  author={Maximilian Seitzer},
  title={{pytorch-fid: FID Score for PyTorch}},
  month={August},
  year={2020},
  note={Version 0.1.1},
  howpublished={\url{https://github.com/mseitzer/pytorch-fid}},
}

License

This implementation is licensed under the Apache License 2.0.

FID was introduced by Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler and Sepp Hochreiter in "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium", see https://arxiv.org/abs/1706.08500

The original implementation is by the Institute of Bioinformatics, JKU Linz, licensed under the Apache License 2.0. See https://github.com/bioinf-jku/TTUR.

Comments
  • time-consuming of the FID computation

    Hi, I want to know why the FID computation is so slow. When I calculate the FID of 28000 images, it sometimes gets stuck and takes almost a day or more for a single run! Is there any idea to help me fix this problem? Thanks!

    opened by LonglongaaaGo 11
  • New weight still produce wrong result

    Using the updated weights still gives the wrong result. See this repo https://github.com/AtlantixJJ/PytorchInceptionV3 for details.

    Run this code (you need to store some images in data/cifar10_test, or see the repo above):

    """
    A script to test Pytorch and Tensorflow InceptionV3 have consistent behavior.
    """
    import sys, argparse, os, pathlib
    sys.path.insert(0, ".")
    import numpy as np
    import tensorflow as tf
    import torch
    from inception_modified import inception_v3
    from PIL import Image
    
    parser = argparse.ArgumentParser()
    parser.add_argument("--load_path", default="", help="The path to changed pytorch inceptionv3 weight. Run change_statedict.py to obtain.")
    args = parser.parse_args()
    
    def check_or_download_inception(inception_path):
        ''' Checks if the path to the inception file is valid, or downloads
            the file if it is not present. '''
        INCEPTION_URL = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'
        if inception_path is None:
            inception_path = '/tmp'
        inception_path = pathlib.Path(inception_path)
        model_file = inception_path / 'classify_image_graph_def.pb'
        if not model_file.exists():
            print("Downloading Inception model")
            from urllib import request
            import tarfile
            fn, _ = request.urlretrieve(INCEPTION_URL)
            with tarfile.open(fn, mode='r') as f:
                f.extract('classify_image_graph_def.pb', str(model_file.parent))
        return str(model_file)
    
    def torch2numpy(x):
        return x.detach().cpu().numpy().transpose(0, 2, 3, 1)
    
    torch.backends.cudnn.benchmark = True
    torch.manual_seed(1)
    torch.cuda.manual_seed(1)
    
    data_dir = "data/cifar10_test/"
    imgs_pil = [Image.open(open(data_dir + s, "rb")).resize((299,299), Image.BILINEAR) for s in os.listdir(data_dir)]
    imgs = [np.asarray(img).astype("float32") for img in imgs_pil]
    x_arr = np.array(imgs)
    x_arr_tf = x_arr
    # TF InceptionV3 graph use [0, 255] scale image
    feed = {'FID_Inception_Net/ExpandDims:0': x_arr_tf}
    # This is identical to TF image transformation
    x_arr_torch = x_arr / 255. #(x_arr - 128) * 0.0078125
    x_torch = torch.from_numpy(x_arr_torch.transpose(0, 3, 1, 2)).float().cuda()
    
    model = inception_v3(pretrained=True, aux_logits=False, transform_input=False)
    if len(args.load_path) > 1:
        # default: pretrained/inception_v3_google.pth
        print("=> Get changed weight from %s" % args.load_path)
        try:
            model.load_state_dict(torch.load(args.load_path))
        except RuntimeError:
            pass
    model.cuda()
    model.eval()
    
    if x_torch.size(2) != 299:
        import torch.nn.functional as F
        x_torch = F.interpolate(x_torch,
                size=(299, 299),
                mode='bilinear',
                align_corners=False)
    features = model.get_feature(x_torch)
    feature_pytorch = features[-1].detach().cpu().numpy()
    if len(feature_pytorch.shape) == 4:
        feature_pytorch = feature_pytorch[:, :, 0, 0]
    
    inception_path = check_or_download_inception("pretrained")
    with tf.gfile.FastGFile("pretrained/classify_image_graph_def.pb", 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def( graph_def, name='FID_Inception_Net')
        
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)
    
    layername = "FID_Inception_Net/pool_3:0"
    layer = sess.graph.get_tensor_by_name(layername)
    ops = layer.graph.get_operations()
    for op_idx, op in enumerate(ops):
        for o in op.outputs:
            shape = o.get_shape()
            if shape._dims != []:
                shape = [s.value for s in shape]
                new_shape = []
                for j, s in enumerate(shape):
                    if s == 1 and j == 0:
                        new_shape.append(None)
                    else:
                        new_shape.append(s)
                # print(o.name, shape, new_shape)
                o.__dict__['_shape_val'] = tf.TensorShape(new_shape)
    
    tensor_list = [n.name for n in tf.get_default_graph().as_graph_def().node]
    
    target_layer_names = ["FID_Inception_Net/Mul:0", "FID_Inception_Net/conv:0", "FID_Inception_Net/pool_3:0"]
    target_layers = [sess.graph.get_tensor_by_name(l) for l in target_layer_names]
    
    sess.run(tf.global_variables_initializer())
    res = sess.run(target_layers, feed)
    x_tf = res[0]
    feature_tensorflow = res[-1][:, 0, 0, :]
    
    print("=> Pytorch pool3:")
    print(feature_pytorch[0][:6])
    print("=> Tensorflow pool3:")
    print(feature_tensorflow[0][:6])
    print("=> Mean abs difference")
    print(np.abs(feature_pytorch - feature_tensorflow).mean())
    
    def get_tf_layer(name):
        return sess.run(sess.graph.get_tensor_by_name(name + ':0'), feed)
    

    result:

    => Pytorch pool3:
    [0.42730308 0.00819586 0.27243498 0.2880235  0.10205843 0.05626289]
    => Tensorflow pool3:
    [0.13085267 0.5260418  0.22931635 0.02919772 0.2439549  0.50965333]
    => Mean abs difference
    0.34774715
    
    opened by AtlantixJJ 10
  • Error while calculating FID score for a small dataset

    I'm calculating the FID score for a small dataset (folder 1: 1341 images, folder 2: 1000 images) with --dims 768. Please see the error below:

    100%|██████████| 10/10 [00:11<00:00, 1.11s/it]
    Warning: batch size is bigger than the data size. Setting batch size to data size


    ValueError                                Traceback (most recent call last)
    /usr/lib/python3.6/runpy.py in run_module(mod_name, init_globals, run_name, alter_sys)
        203         run_name = mod_name
        204     if alter_sys:
    --> 205         return _run_module_code(code, init_globals, run_name, mod_spec)
        206     else:
        207         # Leave the sys module alone

    7 frames
    /content/drive/My Drive/pytorch-fid-master/pytorch_fid/fid_score.py in get_activations(files, model, batch_size, dims, cuda)
        100     pred_arr = np.empty((len(files), dims))
        101
    --> 102     for i in tqdm(range(0, len(files), batch_size)):
        103         start = i
        104         end = i + batch_size

    ValueError: range() arg 3 must not be zero

    /usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py:2590: UserWarning: Unknown failure executing module: <pytorch_fid>
      warn('Unknown failure executing module: <%s>' % mod_name)

    opened by sagarkora 8
  • Have you compared it with the official TF implementation?

    Hi, have you compared it with the official TF implementation? Are the scores significantly different? Are there pitfalls one should be aware of when using it that might give significantly different results than TF?

    Thanks, and thank you for writing that code.

    opened by Vermeille 7
  • [ValueError: axes don't match array] in the "imgs.transpose((0, 3, 1, 2))" line

    Hello all, I used the following command to calculate the FID distance between two different folders:

    ./fid_score.py /data/vision/pytorch-CycleGAN-and-pix2pix/datasets/synthia2kitti_ex1/testAB /data/vision/pytorch-CycleGAN-and-pix2pix/datasets/synthia2kitti_ex1/testB

    I've come across the following error:

    Traceback (most recent call last):
      File "./fid_score.py", line 262, in <module>
        args.dims)
      File "./fid_score.py", line 249, in calculate_fid_given_paths
        dims, cuda)
      File "./fid_score.py", line 223, in _compute_statistics_of_path
        imgs = imgs.transpose((0, 3, 1, 2))
    ValueError: axes don't match array

    Please give me any comment. Thank you.

    opened by flipflop98 7
  • FID score of a dataset to itself does not give 0

    By definition, the FID score has a minimum bound of 0, but I got -4.8e-5 when I calculated the distance of a dataset to itself. Obviously, this is very close to zero. Nevertheless, I can't think of a reason why this would be non-zero in your implementation. Any ideas?

    opened by ogencoglu 6
  • Question about input normalization

    Hi,

    First of all, thanks for the nice implementation. Could you explain why you normalized the inputs like here? The normalized results seem different from the snippet below.

    # mean and var for ImageNet
    mean = (0.485, 0.456, 0.406)
    std = (0.229, 0.224, 0.225)
    
    transform = transforms.Compose([
                        transforms.ToTensor(),
                        transforms.Normalize(mean=mean, std=std)])
    

    Best, Yunjey

    opened by yunjey 6
  • If the size of the input image is different from the set value, what should be done

    Since I want to change the size of the network input image (for example, to a rectangular image), do I need to retrain the parameters? If so, how can this be achieved? If not, should we just change 229×229 to the required size, e.g. 256×176? Hope someone can help answer, thank you very much.

    opened by activate-an 5
  • Installation doesn't work

    Hi,

    first and foremost thank you very much for your work! I have an installation problem; the shell gives me the following output:

    ERROR: Command errored out with exit status 1:
     command: 'C:\Users\bibi\anaconda3\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\bibi\AppData\Local\Temp\pip-install-ma3x6o5s\pytorch-fid_20cb67d1201c4ad9ae07a1faa01a3199\setup.py'"'"'; __file__='"'"'C:\Users\bibi\AppData\Local\Temp\pip-install-ma3x6o5s\pytorch-fid_20cb67d1201c4ad9ae07a1faa01a3199\setup.py'"'"'; f=getattr(tokenize, '"'"'open'"'"', open)(__file__); code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"'); f.close(); exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\bibi\AppData\Local\Temp\pip-pip-egg-info-y5m73gya'
     cwd: C:\Users\bibi\AppData\Local\Temp\pip-install-ma3x6o5s\pytorch-fid_20cb67d1201c4ad9ae07a1faa01a3199
    Complete output (9 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\bibi\AppData\Local\Temp\pip-install-ma3x6o5s\pytorch-fid_20cb67d1201c4ad9ae07a1faa01a3199\setup.py", line 34, in <module>
        packages=setuptools.find_packages(where='src/'),
      File "C:\Users\bibi\anaconda3\lib\site-packages\setuptools\__init__.py", line 64, in find
        convert_path(where),
      File "C:\Users\bibi\anaconda3\lib\distutils\util.py", line 112, in convert_path
        raise ValueError("path '%s' cannot end with '/'" % pathname)
    ValueError: path 'src/' cannot end with '/'

    ERROR: Command errored out with exit status 1: python setup.py egg_info. Check the logs for full command output.

    Can you please help?

    opened by pocaha 5
  • Why choose num_classes=1008 when creating the model?

    Hello,

    Is there a special reason why you choose num_classes=1008 instead of the default value 1000 when creating the model? Refer to: https://github.com/mseitzer/pytorch-fid/blob/master/pytorch_fid/inception.py#L193

    Best, Jan

    opened by jvhoffbauer 5
  • Dramatically speed up FID computation using DataLoader for asynchronous data loading

    Hello,

    By using PyTorch's data loading pipeline, we can benefit from parallel and asynchronous data loading.

    With the same batch size, my FID computation time goes from ~18 min to ~3 min. A minimal sketch of the idea follows this issue.

    opened by Vermeille 5
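
    A hedged sketch of the technique (ImagePathDataset and the resize choice here are illustrative, not this PR's actual code; recent pytorch-fid versions ship something similar):

    import pathlib
    from torch.utils.data import DataLoader, Dataset
    from torchvision import transforms
    from PIL import Image

    class ImagePathDataset(Dataset):
        """Loads and preprocesses one image per file path."""
        def __init__(self, files):
            self.files = list(files)
            # Resize so images batch cleanly; note the README's caveat that
            # interpolation choices slightly affect FID values.
            self.transform = transforms.Compose([
                transforms.Resize((299, 299)),
                transforms.ToTensor(),
            ])

        def __len__(self):
            return len(self.files)

        def __getitem__(self, i):
            return self.transform(Image.open(self.files[i]).convert('RGB'))

    files = sorted(pathlib.Path('path/to/dataset').glob('*.png'))
    # num_workers > 0 decodes images in background worker processes,
    # overlapping CPU preprocessing with GPU inference.
    loader = DataLoader(ImagePathDataset(files), batch_size=50,
                        num_workers=4, pin_memory=True)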
  • added function to compute likelihood between sample and target distri…

    Hello, it would be useful to compute the likelihood of a single sample under a specified target distribution (LLID), and I believe this is a significant missing feature.

    I created a pull request adding a function that computes the likelihood of a sample under a target distribution.

    Usage

    The usage is very similar to computing the FID score between two multivariate Gaussian distributions. To compute the LLID score between a sample and a dataset, where the images of the sample and the target dataset are contained in individual folders:

    python -m pytorch_llid path/to/target/dataset path/to/sample

    opened by LucaCellamare 0
  • OSError: image file is truncated

    I double-checked the dataset and there are no broken images.

    Searching on Baidu, it seems that the error occurs when loading large images. As a result, the source code has to be modified to solve this problem.

    opened by Rem-Yin 1
  • The calculation of fid is too slow.

    Thanks for sharing.

    The calculation of FID is too slow: linalg.sqrtm(sigma1.dot(sigma2), disp=False) alone takes several minutes. How can it be sped up?

    opened by liuchanglab 1
  • Implement Inception Score

    Hi, recently some research still evaluates models with the Inception Score, and issue #20 also mentions it. I created a pull request to add an implementation of the Inception Score computation (a generic sketch of the standard formula appears after this issue).

    Change

    • implement calculate_inception_score. In this implementation, I used the Inception-V3 weights from the PyTorch team to predict the class probabilities of images (to validate my implementation in the test phase).
    • add --inception-score as a store_true argument and --splits with a default of 10 (just like the original code).

    Test

    I tested it with CIFAR-10 (pytorch-fid/src/pytorch-fid/test.py). In this paper, the mean and std of the Inception Score on CIFAR-10 are 9.737 and 0.1489; in this PR, the test returns an Inception Score of 9.6728±0.1499 (the difference may come from the Torch or NumPy version).

    Problems

    • While testing, I found that IS is really hard to use, and not only because of the issues from the paper above. Because the predicted results have to be split into groups, the ordering of the data is also a problem (if shuffle=True in the dataloader, the result will be different).
    • Unlike FID, IS is computed individually for each dataset, so the input only needs one path. The function currently takes the first path as input and ignores the second; I wonder if there is a more flexible way to fix this.
    opened by GiangHLe 0
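
    For reference, a generic sketch of the standard Inception Score computation (illustrative, not this PR's actual code; probs are the Inception softmax outputs over the whole dataset):

    import numpy as np

    def inception_score(probs, splits=10):
        # IS = exp( E_x[ KL( p(y|x) || p(y) ) ] ), computed per split and averaged.
        eps = 1e-16
        scores = []
        for part in np.array_split(probs, splits):
            p_y = part.mean(axis=0, keepdims=True)  # marginal p(y) in this split
            kl = (part * (np.log(part + eps) - np.log(p_y + eps))).sum(axis=1)
            scores.append(np.exp(kl.mean()))
        return float(np.mean(scores)), float(np.std(scores))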
  • FID for COCO alike generated images

    Hi, thanks for your work. I had one doubt regarding the FID score: if I want to train a GAN to generate COCO-like datasets, can I use the pretrained Inception net for that?

    Logically, it should not be the case, as the backbone's training data is itself very different from COCO-like datasets.

    Also, instead of InceptionNet, can I take any backbone trained for object detection on COCO or VOC and use that as my network for the evaluation?

    Thanking you in advance

    question 
    opened by shreejalt 1
Releases

  • fid_weights (May 27, 2019)

    This release is a placeholder to host a specific version of Inception weights.

    Namely, the original FID score implementation uses different Tensorflow weights (http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz) than the Inception model available in Pytorch (http://download.tensorflow.org/models/image/imagenet/inception-v3-2016-03-01.tar.gz). The weights available here are the original FID weights converted to Pytorch.

    pt_inception-2015-12-05-6726825d.pth (91.19 MB)
Code for the paper "Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness"

DU-VAE This is the pytorch implementation of the paper "Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness" Acknowledgement

Dazhong Shen 4 Oct 19, 2022
Applications using the GTN library and code to reproduce experiments in "Differentiable Weighted Finite-State Transducers"

gtn_applications An applications library using GTN. Current examples include: Offline handwriting recognition Automatic speech recognition Installing

Facebook Research 68 Dec 29, 2022
Official Pytorch Implementation of Relational Self-Attention: What's Missing in Attention for Video Understanding

Relational Self-Attention: What's Missing in Attention for Video Understanding This repository is the official implementation of "Relational Self-Atte

mandos 43 Dec 07, 2022
Finite-temperature variational Monte Carlo calculation of uniform electron gas using neural canonical transformation.

CoulombGas This code implements the neural canonical transformation approach to the thermodynamic properties of uniform electron gas. Building on JAX,

FermiFlow 9 Mar 03, 2022
Predict the latency time of the deep learning models

Deep Neural Network Prediction Step 1. Genernate random parameters and Run them sequentially : $ python3 collect_data.py -gp -ep -pp -pl pooling -num

QAQ 1 Nov 12, 2021
PaddlePaddle GAN library, including lots of interesting applications like First-Order motion transfer, wav2lip, picture repair, image editing, photo2cartoon, image style transfer, and so on.

English | 简体中文 PaddleGAN PaddleGAN provides developers with high-performance implementation of classic and SOTA Generative Adversarial Networks, and s

6.4k Jan 09, 2023
Codebase for Image Classification Research, written in PyTorch.

pycls pycls is an image classification codebase, written in PyTorch. It was originally developed for the On Network Design Spaces for Visual Recogniti

Facebook Research 2k Jan 01, 2023
Object-aware Contrastive Learning for Debiased Scene Representation

Object-aware Contrastive Learning Official PyTorch implementation of "Object-aware Contrastive Learning for Debiased Scene Representation" by Sangwoo

43 Dec 14, 2022
Azion the best solution of Edge Computing in the world.

Azion Edge Function docker action Create or update an Edge Functions on Azion Edge Nodes. The domain name is the key for decision to a create or updat

8 Jul 16, 2022
Code for the CVPR2021 paper "Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition"

Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition This repository contains code for the CVPR2021 paper "Patch-NetV

QVPR 368 Jan 06, 2023
Fast and Easy Infinite Neural Networks in Python

Neural Tangents ICLR 2020 Video | Paper | Quickstart | Install guide | Reference docs | Release notes Overview Neural Tangents is a high-level neural

Google 1.9k Jan 09, 2023
FewBit — a library for memory efficient training of large neural networks

FewBit FewBit — a library for memory efficient training of large neural networks. Its efficiency originates from storage optimizations applied to back

24 Oct 22, 2022
Deep learning with dynamic computation graphs in TensorFlow

TensorFlow Fold TensorFlow Fold is a library for creating TensorFlow models that consume structured data, where the structure of the computation graph

1.8k Dec 28, 2022
Training neural models with structured signals.

Neural Structured Learning in TensorFlow Neural Structured Learning (NSL) is a new learning paradigm to train neural networks by leveraging structured

955 Jan 02, 2023
This repository contains the code for the ICCV 2019 paper "Occupancy Flow - 4D Reconstruction by Learning Particle Dynamics"

Occupancy Flow This repository contains the code for the project Occupancy Flow - 4D Reconstruction by Learning Particle Dynamics. You can find detail

189 Dec 29, 2022
PyTorch Implement for Path Attention Graph Network

SPAGAN in PyTorch This is a PyTorch implementation of the paper "SPAGAN: Shortest Path Graph Attention Network" Prerequisites We prefer to create a ne

Yang Yiding 38 Dec 28, 2022
Uncertainty Estimation via Response Scaling for Pseudo-mask Noise Mitigation in Weakly-supervised Semantic Segmentation

Uncertainty Estimation via Response Scaling for Pseudo-mask Noise Mitigation in Weakly-supervised Semantic Segmentation Introduction This is a PyTorch

XMed-Lab 30 Sep 23, 2022
基于Paddle框架的arcface复现

arcface-Paddle 基于Paddle框架的arcface复现 ArcFace-Paddle 本项目基于paddlepaddle框架复现ArcFace,并参加百度第三届论文复现赛,将在2021年5月15日比赛完后提供AIStudio链接~敬请期待 参考项目: InsightFace Padd

QuanHao Guo 16 Dec 15, 2022
Code for "The Box Size Confidence Bias Harms Your Object Detector"

The Box Size Confidence Bias Harms Your Object Detector - Code Disclaimer: This repository is for research purposes only. It is designed to maintain r

Johannes G. 24 Dec 07, 2022
Code for the paper: Hierarchical Reinforcement Learning With Timed Subgoals, published at NeurIPS 2021

Hierarchical reinforcement learning with Timed Subgoals (HiTS) This repository contains code for reproducing experiments from our paper "Hierarchical

Autonomous Learning Group 21 Dec 03, 2022