Overview

modelvshuman: Does your model generalise better than humans?

modelvshuman is a Python library to benchmark the gap between human and machine vision. Using this library, both PyTorch and TensorFlow models can be evaluated on 17 out-of-distribution datasets with high-quality human comparison data.

πŸ† Benchmark

The top-10 models are listed here; the training dataset size is indicated in brackets. Additionally, standard ResNet-50 is included as the last entry of each table for comparison. Model ranks are calculated across the full range of 52 models that we tested. If your model scores better than some (or even all) of the models listed here, please open a pull request and we'll be happy to include it!

Most human-like behaviour

winner | model | accuracy difference ↓ | observed consistency ↑ | error consistency ↑ | mean rank ↓
πŸ₯‡ | CLIP: ViT-B (400M) | .023 | .758 | .281 | 1
πŸ₯ˆ | SWSL: ResNeXt-101 (940M) | .028 | .752 | .237 | 3.67
πŸ₯‰ | BiT-M: ResNet-101x1 (14M) | .034 | .733 | .252 | 4
πŸ‘ | BiT-M: ResNet-152x2 (14M) | .035 | .737 | .243 | 4.67
πŸ‘ | ViT-L (1M) | .033 | .738 | .222 | 6.67
πŸ‘ | BiT-M: ResNet-152x4 (14M) | .035 | .732 | .233 | 7.33
πŸ‘ | BiT-M: ResNet-50x1 (14M) | .042 | .718 | .240 | 9
πŸ‘ | BiT-M: ResNet-50x3 (14M) | .040 | .726 | .228 | 9
πŸ‘ | ViT-L (14M) | .035 | .744 | .206 | 9.67
πŸ‘ | SWSL: ResNet-50 (940M) | .041 | .727 | .211 | 11.33
... | standard ResNet-50 (1M) | .087 | .665 | .208 | 29
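
For reference: observed consistency is the fraction of trials on which a model and a human observer are either both correct or both wrong, and error consistency corrects this for the agreement expected by chance given the two accuracies (Cohen's kappa). The following is a sketch in the notation of the error-consistency literature; the accompanying paper is authoritative:

    \kappa = \frac{c_{\mathrm{obs}} - c_{\mathrm{exp}}}{1 - c_{\mathrm{exp}}},
    \qquad
    c_{\mathrm{exp}} = p_{\mathrm{model}} \, p_{\mathrm{human}} + (1 - p_{\mathrm{model}})(1 - p_{\mathrm{human}})

where p_model and p_human are the respective accuracies.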

Highest out-of-distribution robustness

winner | model | OOD accuracy ↑ | rank ↓
πŸ₯‡ | ViT-L (14M) | .733 | 1
πŸ₯ˆ | CLIP: ViT-B (400M) | .708 | 2
πŸ₯‰ | ViT-L (1M) | .706 | 3
πŸ‘ | SWSL: ResNeXt-101 (940M) | .698 | 4
πŸ‘ | BiT-M: ResNet-152x2 (14M) | .694 | 5
πŸ‘ | BiT-M: ResNet-152x4 (14M) | .688 | 6
πŸ‘ | BiT-M: ResNet-101x3 (14M) | .682 | 7
πŸ‘ | BiT-M: ResNet-50x3 (14M) | .679 | 8
πŸ‘ | SimCLR: ResNet-50x4 (1M) | .677 | 9
πŸ‘ | SWSL: ResNet-50 (940M) | .677 | 10
... | standard ResNet-50 (1M) | .559 | 31

πŸ”§ Installation

Simply clone the repository to a location of your choice and follow these steps:

  1. Set the repository home path by running the following from the command line:

    export MODELVSHUMANDIR=/absolute/path/to/this/repository/
    
  2. Install the package (remove the -e option if you don't intend to add your own model or make any other changes):

    pip install -e .
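
Both steps combined, assuming a fresh clone from GitHub (adjust the paths to your setup):

    git clone https://github.com/bethgelab/model-vs-human.git
    cd model-vs-human
    export MODELVSHUMANDIR="$(pwd)/"
    pip install -e .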
    

πŸ”¬ User experience

Simply edit examples/evaluate.py as desired. Running it will test a list of models on out-of-distribution datasets and generate plots. If you then compile latex-report/report.tex, all plots will be included in one convenient PDF report.
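
For orientation, here is a minimal sketch of what such a script can look like. The Evaluate and Plot entry points and the parameter names below are assumptions modelled on the shipped example; check examples/evaluate.py for the authoritative version.

    # Minimal evaluation script (assumed API, modelled on examples/evaluate.py)
    from modelvshuman import Evaluate, Plot
    from modelvshuman import constants as c

    models = ["resnet50", "bagnet33", "simclr_resnet50x1"]
    datasets = c.DEFAULT_DATASETS   # or a subset, e.g. ["sketch", "edge"]
    params = {"batch_size": 64, "print_predictions": True, "num_workers": 16}

    Evaluate()(models, datasets, **params)   # run the models on the datasets
    Plot(plot_types=c.DEFAULT_PLOT_TYPES)    # write the figures used by the report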

🐫 Model zoo

A wide range of PyTorch and TensorFlow models is implemented; use the model registry (see below) to list them all.

If you add or implement your own model, please make sure to compute its ImageNet accuracy as a sanity check.

How to load a model

If you just want to load a model from the model zoo, this is what you can do:

    # loading a PyTorch model from the zoo
    from modelvshuman.models.pytorch.model_zoo import InfoMin
    model = InfoMin("InfoMin")

    # loading a TensorFlow model from the zoo
    from modelvshuman.models.tensorflow.model_zoo import efficientnet_b0
    model = efficientnet_b0("efficientnet_b0")

How to list all available models

All implemented models are registered by the model registry, which can then be used to list all available models of a certain framework with the following method:

    from modelvshuman import models
    
    print(models.list_models("pytorch"))
    print(models.list_models("tensorflow"))
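
For example, the PyTorch list includes entries such as resnet50, bagnet33 and simclr_resnet50x1, which are the same names the evaluation and plotting routines expect.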

How to add a new model

Adding a new model is possible for standard PyTorch and TensorFlow models. Depending on the framework, open modelvshuman/models/pytorch/model_zoo.py or modelvshuman/models/tensorflow/model_zoo.py. There you can add your own model with a few lines of code, similar to how you would usually load it. If your model has a custom model definition, create a new subdirectory my_fancy_model containing fancy_model.py (i.e. modelvshuman/models/<framework>/my_fancy_model/fancy_model.py), which you can then import from model_zoo.py via from .my_fancy_model import fancy_model.
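
As a concrete illustration, a new PyTorch entry could look like the sketch below. The register_model decorator and the PytorchModel wrapper mirror the existing entries in model_zoo.py, but the exact import paths and the FancyModel constructor are assumptions; compare against a neighbouring entry before copying.

    # Hypothetical entry in modelvshuman/models/pytorch/model_zoo.py.
    # Decorator, wrapper and import paths are assumed from existing entries.
    from ..registry import register_model
    from ..wrappers.pytorch import PytorchModel

    @register_model("pytorch")
    def my_fancy_model(model_name, *args):
        from .my_fancy_model import fancy_model    # your custom definition
        model = fancy_model.FancyModel()           # hypothetical constructor
        return PytorchModel(model, model_name, *args)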

πŸ“ Datasets

In total, 17 datasets with human comparison data collected under highly controlled laboratory conditions are available.

Twelve datasets correspond to parametric or binary image distortions (figure: noise-stimuli). Top row: colour/grayscale, contrast, high-pass, low-pass (blurring), phase noise, power equalisation. Bottom row: opponent colour, rotation, Eidolon I, II and III, uniform noise.

The remaining five datasets correspond to the following nonparametric image manipulations (figure: nonparametric-stimuli): sketch, stylized, edge, silhouette, texture-shape cue conflict.

How to load a dataset

Similarly, if you're interested in just loading a dataset, you can do this via:

    from modelvshuman.datasets import sketch
    dataset = sketch(batch_size=16, num_workers=4)

How to list all available datasets

    from modelvshuman import datasets
    
    print(list(datasets.list_datasets().keys()))

πŸ’³ Credit

We collected the psychophysical data ourselves, but we used existing image dataset sources. Twelve datasets were obtained from Generalisation in humans and deep neural networks. Three datasets were obtained from ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. Additionally, we used one dataset from Learning Robust Global Representations by Penalizing Local Predictive Power (sketch images from ImageNet-Sketch) and one dataset from ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness (stylized images from Stylized-ImageNet).

We thank all model authors and repository maintainers for providing the models described above.

Comments
  • Question about self-supervised models

    Hello, thanks for the great toolbox~

    I'm a little confused about the results on self-supervised models such as SimCLR. They don't have a class-specific classifier for ImageNet-1k, so how do you load the classifier weights? Do you load a separately trained classifier (as in the linear-probing protocol) or a fully fine-tuned network (as in the end-to-end fine-tuning protocol)?

    And another question: if I just want to test the shape and texture accuracy, as mentioned in Intriguing Properties of Vision Transformers, which dataset type should I choose?

    Thank you very much~

    opened by pengzhiliang 5
  • dataset path?

    Hi --- first thank you a lot for this useful dataset and API!

    I'm trying to load the datasets using the following code, but got an error message saying the sketch dataset path was not found: model-vs-human/datasets/sketch/dnn/

    Does this mean that I should download the sketch image dataset myself from the original source and put it under this path? A little more documentation on how the image datasets should be structured would be very useful!

    from modelvshuman.datasets import sketch
    dataset = sketch(batch_size=16, num_workers=4)
    

    Thank you!

    opened by ahnchive 2
  • Question Regarding Human Raw Dataset

    Hi! Thank you for providing a valuable dataset.

    As I dug into the raw data containing the human annotations, some questions arose. Below is how I analyzed the raw data from the contrast experiment.

    For an image named 0580_cop_dnn_c05_bicycle_10_n03792782_10129.png, I extracted the rows in which image_id matches 'c05_bicycle_10_n03792782_10129.png' and got the following 4 rows. Here, human_annotation is a simple concatenation of all CSV files from the contrast experiment.

    Below is a visualization of the image 0580_cop_dnn_c05_bicycle_10_n03792782_10129.png.

    Although the image is not very clear, I do not fully agree with the human predictions. (They predicted 'clock', 'oven', 'bear', and 'keyboard'; it is very clear that the figure is not a keyboard.) Is there anything wrong with, or missing from, my analysis?

    Thanks,

    opened by jiyounglee-0523 2
  • Request for data to reproduce figures

    Very interesting work, and thanks very much for releasing it publicly. We are working on extending some of your results/studies; could you please provide more information related to Figure 2?

    Figure 2 compares several models, but they are not all labeled. I am interested in the accuracy-distortion tradeoff for each model. The figure shows this information only at a coarse level, e.g. comparing all self-supervised models, adversarially trained models, etc. It would be perfect if you could provide each model name (corresponding to your model zoo) together with its performance at the various distortion levels for the 12 distortion types considered.

    Thanks again, looking forward to hearing from you!

    opened by vihari 2
  • not able to reproduce the results

    Firstly thanks for sharing the interesting paper and code!

    I followed the installation steps, but running python examples/evaluate.py (without editing anything) resulted in the following strange error. Could the authors give some insight into the reasons?

    Plotting accuracy for dataset colour
    The following model(s) were not found: alexnet
    List of possible models in this dataset: ['bagnet33' 'resnet50' 'simclr_resnet50x1']
    The following model(s) were not found: subject-*
    List of possible models in this dataset: ['bagnet33' 'resnet50' 'simclr_resnet50x1']
    Traceback (most recent call last):
      File "examples/evaluate.py", line 28, in <module>
        run_plotting()
      File "examples/evaluate.py", line 18, in run_plotting
        figure_directory_name = figure_dirname)
      File "/home/eric/model_vs_human/modelvshuman/plotting/plot.py", line 108, in plot
        result_dir=result_dir)
      File "/home/eric/model_vs_human/modelvshuman/plotting/plot.py", line 744, in plot_accuracy
        result_dir=result_dir, plot_type="accuracy")
      File "/home/eric/model_vs_human/modelvshuman/plotting/plot.py", line 772, in plot_general_analyses
        experiment=e)
      File "/home/eric/model_vs_human/modelvshuman/plotting/analyses.py", line 254, in get_result_df
        r = self.analysis(subdat)
      File "/home/eric/model_vs_human/modelvshuman/plotting/analyses.py", line 284, in analysis
        self._check_dataframe(df)
      File "/home/eric/model_vs_human/modelvshuman/plotting/analyses.py", line 24, in _check_dataframe
        assert len(df) > 0, "empty dataframe"
    AssertionError: empty dataframe

    opened by largenn 2
  • Question about shape bias

    Could you please provide the formula used to obtain the shape bias? I can successfully run this repo with my own model, but I am confused about how the shape bias is computed. I would be very grateful if you could elaborate. Thanks!

    opened by YuanLiuuuuuu 1
  • Difficulties loading adversarially trained models

    I didn't succeed in loading the adversarially trained models. run_evaluation results in the following error:

    HTTP Error 403: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.

    Any idea how to fix this?

    opened by lukasShuber 1
  • Update license info

    In https://github.com/bethgelab/model-vs-human/blob/master/setup.cfg#L18 we state LGPLv3, but we should instead refer to https://github.com/bethgelab/model-vs-human/tree/master/licenses.

    documentation 
    opened by rgeirhos 1
  • Clip Improvements

    Two changes:

    1. Explicitly cast the images to the device (the previous code caused problems when running on Google Colab);
    2. Compute the zero-shot weights needed by CLIP only once and recycle them for subsequent batches. Since the classes remain fixed, computing them for every batch is unnecessary and greatly slows down experiments.
    opened by yurigalindo 1
  • Feature request: Machine readable results, e.g. CSV

    Currently, the toolbox saves LaTeX tables and plots with the resulting accuracies and error consistencies. For custom plotting routines and the like, it would be great to additionally have these numbers in an easy-to-parse format, e.g. CSV or JSON.

    enhancement 
    opened by dekuenstle 0
  • Feature request: Simpler loading of custom models

    Using your toolbox with the built-in models is straightforward, but we would like to compare some custom PyTorch models. It would be great to have a routine for adding these models (i.e. subclasses of nn.Module) to the toolbox registry from your own script. If this is already possible, it would be great if you could share an example.

    Currently, we add the model inside the toolbox's files, which makes extensions complicated and redundant (e.g. the model name appears in the path, the function name, and the plotting routine).

    Thanks, David

    enhancement 
    opened by dekuenstle 2
  • BiT models via timm?

    I had difficulties obtaining the BiT models via pytorch image models (timm). I then used, e.g.:

    import timm
    m = timm.create_model('resnetv2_152x12_bitm', pretrained=True)

    in the PyTorch model_zoo.py.

    This worked perfectly.

    opened by lukasShuber 0