A new test set for ImageNet

Overview

ImageNetV2

The ImageNetV2 dataset contains new test data for the ImageNet benchmark. This repository provides associated code for assembling and working with ImageNetV2. The actual test sets are stored in a separate location.

ImageNetV2 contains three test sets with 10,000 new images each. Importantly, these test sets were sampled after a decade of progress on the original ImageNet dataset. This makes the new test data independent of existing models and guarantees that the accuracy scores are not affected by adaptive overfitting. We designed the data collection process for ImageNetV2 so that the resulting distribution is as similar as possible to the original ImageNet dataset. Our paper "Do ImageNet Classifiers Generalize to ImageNet?" describes ImageNetV2 and associated experiments in detail.

In addition to the three test sets, we also release our pool of candidate images from which the test sets were assembled. Each image comes with rich metadata such as the corresponding Flickr search queries or the annotations from MTurk workers.

The aforementioned paper also describes CIFAR-10.1, a new test set for CIFAR-10. It can be found in the following repository: https://github.com/modestyachts/CIFAR-10.1

Using the Dataset

Before explaining how the code in this repository was used to assemble ImageNetV2, we first describe how to load our new test sets.

Test Set Versions

There are currently three test sets in ImageNetV2:

  • Threshold0.7 was built by sampling ten images for each class among the candidates with selection frequency at least 0.7.

  • MatchedFrequency was sampled to match the MTurk selection frequency distribution of the original ImageNet validation set for each class.

  • TopImages contains the ten images with highest selection frequency in our candidate pool for each class.

In our code, we adopt the following naming convention: Each test set is identified with a string of the form

imagenetv2-<test-set-letter>-<revision-number>

for instance, imagenetv2-b-31. The Threshold0.7, MatchedFrequency, and TopImages test sets have the letters a, b, and c, respectively. The current revisions of the test sets are imagenetv2-a-44, imagenetv2-b-33, and imagenetv2-c-12. We refer to our paper for a detailed description of these test sets and the review process underlying the different test set revisions.
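
Putting the naming convention and the current revisions together, the mapping can be summarized in Python (a convenience snippet, not part of the repository):

# Test set names and their current identifiers.
TEST_SETS = {
    "Threshold0.7":     "imagenetv2-a-44",
    "MatchedFrequency": "imagenetv2-b-33",
    "TopImages":        "imagenetv2-c-12",
}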

Loading a Test Set

You can download the test sets from the following URL: http://imagenetv2public.s3-website-us-west-2.amazonaws.com/. There is a separate link for each test set, and the downloaded archives must be decompressed before use.
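
For convenience, here is a minimal download-and-extract sketch in Python (the archive name below is an assumption; check the download page for the exact file names):

# Sketch: download and extract one test set. The archive name is assumed;
# see the download page above for the actual file names.
import tarfile
import urllib.request

url = ("http://imagenetv2public.s3-website-us-west-2.amazonaws.com/"
       "imagenetv2-matched-frequency.tar.gz")  # assumed archive name
filename, _ = urllib.request.urlretrieve(url)
with tarfile.open(filename) as tar:
    tar.extractall()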

To load the dataset, you can use the ImageFolder class from torchvision on the extracted folder.

For instance, the following code loads the MatchedFrequency dataset:

from torchvision import datasets

# Point the root at the extracted test set directory.
dataset = datasets.ImageFolder(root='imagenetv2-matched-frequency')
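
For batched evaluation, the dataset can then be wrapped in a standard DataLoader. A minimal sketch, where the preprocessing choices are the usual ImageNet conventions rather than a requirement of this repository:

# Standard ImageNet-style preprocessing (an assumption, not mandated here).
import torch
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder(root='imagenetv2-matched-frequency',
                               transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=False)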

Dataset Creation Pipeline

The dataset creation process has several stages outlined below. We describe the process here at a high level. If you have questions about any individual steps, please contact Rebecca Roelofs ([email protected]) and Ludwig Schmidt ([email protected]).

1. Downloading images from Flickr

In the first stage, we collected candidate images from the Flickr image hosting service. This requires a Flickr API key.

We ran the following command to search Flickr for images for a fixed list of wnids:

python flickr_search.py "../data/flickr_api_keys.json" \
                        --wnids "{wnid_list.json}" \
                        --max_images 200 \
                        --max_date_taken "2013-07-11" \
                        --max_date_uploaded "2013-07-11" \
                        --min_date_taken "2012-07-11" \
                        --min_date_uploaded "2012-07-11"

We refer to the paper for more details on which Flickr search parameters we used to complete our candidate pool.

The script writes search result metadata, including the Flickr URLs returned for each query, to data/search_results/.

We then stored the images in an Amazon S3 bucket using

python download_images_from_flickr.py ../data/search_results/{search_result.json} --batch --parallel

2. Create HITs

Similar to the original ImageNet dataset, we used Amazon Mechanical Turk (MTurk) to filter our pool of candidates. The main unit of work on MTurk is a HIT (Human Intelligence Task), which in our case consists of 48 images for a target class. The format of our HITs was derived from the original ImageNet HITs.

To submit a HIT, we performed the following steps. They require a configured MTurk account.

  1. Encrypt all image URLs. This is necessary so that MTurk workers cannot identify whether an image is from the original validation set or our candidate pool by the source URL.
     python encrypt_copy_objects.py imagenet2candidates_mturk --strip_string ".jpg" --pywren
  2. Run the image consistency check. This checks that all of the new candidate images have been stored to S3 and have encrypted URLs.
     python image_consistency_check.py
  3. Generate HIT candidates. This outputs a list of candidates to data/hit_candidates.
     python generate_hit_candidates.py --num_wnids 1000
  4. Submit live HITs to MTurk.
     bash make_hits_live.sh sample_args_10.json <username> <latest_hit_candidate_file>
  5. Wait for the prompt and check whether the HTML file in the code/ directory looks correct.
  6. Type in the word LIVE to confirm submitting the HITs to MTurk (this costs money).

The HIT metadata created by make_hits_live.sh is stored in data/mturk/hit_data_live/.

After a set of HITs has been submitted, you can check its progress using

python3 mturk.py show_hit_progress --live --hit_file ../data/mturk/hit_data_live/{hit.json}

Additionally, we occasionally used the Jupyter notebook inspect_hit.ipynb to visually examine the HITs we created. The code for this notebook is stored in inspect_hit_notebook_code.py.

3. Remove near duplicates

Next, we removed near-duplicates from our candidate pool. We checked for near-duplicates both within our new test set and between our new test set and the original ImageNet dataset.

To find near-duplicates, we computed the 30 nearest neighbors for each candidate image under three different metrics: l2 distance on raw pixels, l2 distance on features extracted from a pre-trained VGG model (fc7), and SSIM (structural similarity).

The fc7 metric requires that each image is featurized using the same pre-trained VGG model. The scripts featurize.py, featurize_test.py, and featurize_candidates.py were used to perform the fc7 featurization.
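
For illustration, here is a minimal fc7 featurization sketch using torchvision (the model variant and preprocessing are assumptions; featurize.py may differ):

# Sketch: extract fc7 activations (4096-d) from a pre-trained VGG-16.
import torch
from torchvision import models

vgg = models.vgg16(pretrained=True).eval()
# All classifier layers except the final fc8 layer; in eval mode the trailing
# Dropout is the identity, so this outputs the fc7 activations.
fc7 = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])

def featurize(batch):
    # batch: (n, 3, 224, 224) tensor of normalized images
    with torch.no_grad():
        x = vgg.avgpool(vgg.features(batch))
        return fc7(torch.flatten(x, 1))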

Next, we computed the nearest neighbors for each image. Each metric has a different starting script:

  • run_near_duplicate_checker_dssim.py
  • run_near_duplicate_checker_l2.py
  • run_near_duplicate_checker_fc7.py

All three scripts use near_duplicate_checker.py for the underlying computation.
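
As a rough illustration of the underlying computation, here is a minimal raw-pixel l2 nearest-neighbor sketch (the real near_duplicate_checker.py is more involved and also handles the other metrics):

# Sketch: indices of the k closest reference images per candidate under l2.
import numpy as np

def top_k_neighbors(candidates, references, k=30):
    # candidates: (n, d) and references: (m, d) arrays of flattened pixels.
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2.
    d2 = (np.sum(candidates**2, axis=1)[:, None]
          - 2.0 * candidates @ references.T
          + np.sum(references**2, axis=1)[None, :])
    return np.argsort(d2, axis=1)[:, :k]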

The script test_near_duplicate_checker.sh was used to run the unit tests for the near duplicate checker contained in test_near_duplicate_checker.py.

Finally, we manually reviewed the nearest neighbor pairs using the notebook review_near_duplicates.ipynb. The file review_near_duplicates_notebook_code.py contains the code for this notebook. The review output is saved in data/metadata/nearest_neighbor_reviews_v2.json. All near duplicates that we found are saved in data/metadata/near_duplicates.json.

4. Sample Dataset

After we created a labeled candidate pool, we sampled the new test sets.

We use a separate bash script to sample each version of the dataset, i.e., sample_dataset_type_{a}.sh. Each script calls sample_dataset.py and initialize_dataset_review.py with the correct arguments. The file dataset_sampling.py contains helper functions for the sampling procedure.
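
As an example of one strategy, here is a rough sketch of frequency-matched sampling for a single class, in the spirit of MatchedFrequency (bin edges and rounding are assumptions; sample_dataset.py implements the actual procedure):

# Sketch: sample candidates so their selection-frequency histogram matches
# that of the original validation images for the same class.
import random
import numpy as np

def sample_matched(candidates, val_freqs, per_class=10, seed=0):
    # candidates: list of (image_id, selection_frequency) pairs
    bin_edges = np.linspace(0.0, 1.0, 6)  # assumed binning of [0, 1]
    target, _ = np.histogram(val_freqs, bins=bin_edges)
    quota = np.round(per_class * target / target.sum()).astype(int)
    # Assign each candidate to a bin (clip so f == 1.0 lands in the top bin).
    bins = np.clip(np.digitize([f for _, f in candidates], bin_edges) - 1,
                   0, len(quota) - 1)
    rng = random.Random(seed)
    chosen = []
    for b in range(len(quota)):
        in_bin = [img for (img, _), bi in zip(candidates, bins) if bi == b]
        chosen += rng.sample(in_bin, min(int(quota[b]), len(in_bin)))
    return chosen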

5. Review Final Dataset

For quality control, we added a final reviewing step to our dataset creation pipeline.

  • initialize_dataset_review.py initializes the metadata needed for each dataset review round.

  • final_dataset_inspection.ipynb is used to manually review dataset versions.

  • final_dataset_inspection_notebook_code.py contains the code needed for the final_dataset_inspection.ipynb notebook.

  • review_server.py is the review server used for additional cleaning of the candidate pool. The review server starts a web UI that allows one to browse all candidate images for a particular class. In addition, a user can easily flag images that are problematic or near duplicates.

The review server can use local, downloaded images if started with the --use_local_images flag:

python3 review_server.py --use_local_images

In addition, you also need to launch a separate static file server for serving the images. There is a script in data for starting the static file server:

./start_file_server.sh

The local images can be downloaded using

  • download_all_candidate_images_to_cache.py
  • download_dataset_images.py

Data classes

Our code base contains a set of data classes for working with various aspects of ImageNetV2.

  • imagenet.py: This file contains the ImageNetData class that provides metadata about ImageNet (a list of classes, etc.) and functionality for loading images in the original ImageNet dataset. The scripts generate_imagenet_metadata_pickle.py and generate_class_info_file.py are used to assemble some of the metadata in the ImageNetData class.

  • candidate_data.py contains the CandidateData class that provides easy access to all candidate images in ImageNetV2 (both image data and metadata). The metadata file used in this class comes from generate_candidate_metadata_pickle.py.

  • image_loader.py provides a unified interface to loading image data from either ImageNet or ImageNetV2.

  • mturk_data.py provides the MTurkData class for accessing the results from our MTurk HITs. The data used by this class is assembled via generate_mturk_data_pickle.py.

  • near_duplicate_data.py loads and processes the information about near-duplicates in ImageNetV2. Some of the metadata is prepared with generate_review_thresholds_pickle.py.

  • dataset_cache.py allows easy loading of our various test set revisions.

  • prediction_data.py provides functionality for loading the predictions of various classification models on our three test sets.

The functionality provided by each data class is documented via examples in the notebooks folder of this repository.
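
To give a feel for how the data classes fit together, here is a hypothetical usage sketch (the constructor signatures are assumptions; the notebooks are the authoritative reference):

# Hypothetical sketch; see the notebooks folder for real, working examples.
from imagenet import ImageNetData
from candidate_data import CandidateData
from mturk_data import MTurkData

imagenet = ImageNetData()     # class list and original ImageNet images
candidates = CandidateData()  # candidate images and metadata
mturk = MTurkData()           # selection frequencies from our HITs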

Evaluation Pipeline

Finally, we describe our evaluation pipeline for the PyTorch models. The main file is eval.py, which can be invoked as follows:

python eval.py --dataset $DATASET --models $MODELS

where $DATASET is one of

  • imagenet-validation-original (the original validation set)
  • imagenetv2-b-33 (our new MatchedFrequency test set)
  • imagenetv2-a-44 (our new Threshold0.7 test set)
  • imagenetv2-c-12 (our new TopImages test set).

The $MODELS parameter is a comma-separated list of model names in the torchvision or Cadene/pretrained-models.pytorch repositories. Alternatively, $MODELS can also be all, in which case all models are evaluated.
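
For example, the following call evaluates two torchvision models (the model names here are illustrative) on our MatchedFrequency test set:

python eval.py --dataset imagenetv2-b-33 --models resnet50,vgg16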

License

Unless noted otherwise in individual files, the code in this repository is released under the MIT license (see the LICENSE file). The LICENSE file does not apply to the actual image data. The images come from Flickr, which provides the corresponding license information. They can be used in the same way as the original ImageNet dataset.
