Visual Attributes in the Wild (VAW)

This repository provides data for the VAW dataset as described in the CVPR 2021 Paper:

Learning to Predict Visual Attributes in the Wild

Khoi Pham, Kushal Kafle, Zhihong Ding, Zhe Lin, Quan Tran, Scott Cohen, Abhinav Shrivastava

[Figure: VAW dataset overview image]

Dataset Setup

Our VAW dataset is partly based on the annotations in the GQA and VG-PhraseCut datasets.
The images in the VAW dataset therefore come from the Visual Genome dataset, which is also the source of the images in GQA and VG-PhraseCut. This section outlines the annotation format and basic statistics of our dataset.

Annotation Format

The annotations are found in data/train_part1.json, data/train_part2.json, data/val.json, and data/test.json for the train (split into two parts to stay under the GitHub file-size limit), validation, and test splits of the VAW dataset, respectively. Each file consists of a list of instances with the following fields:

image_id: int (image IDs correspond to the respective Visual Genome image IDs)
instance_id: int (unique instance ID)
instance_bbox: [x, y, width, height] (bounding box coordinates for the instance)
instance_polygon: list of [x, y] (vertices of the segmentation polygon if one exists, else None)
object_name: str (name of the object for the instance)
positive_attributes: list of str (explicitly labeled positive attributes for the instance)
negative_attributes: list of str (explicitly labeled negative attributes for the instance)
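
For concreteness, here is a minimal sketch of loading the annotations and reading these fields; it assumes only the data/ layout and field names described above:

```python
import json

# Load both train parts and merge them into a single list of instances.
with open("data/train_part1.json") as f:
    train = json.load(f)
with open("data/train_part2.json") as f:
    train += json.load(f)

inst = train[0]
x, y, w, h = inst["instance_bbox"]  # bounding box in pixel coordinates
print(inst["image_id"], inst["object_name"], (x, y, w, h))
print("positive:", inst["positive_attributes"])
print("negative:", inst["negative_attributes"])

# instance_polygon is None when no segmentation polygon exists.
if inst["instance_polygon"] is not None:
    print("polygon with", len(inst["instance_polygon"]), "vertices")
```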

Download Images

The images can be downloaded from the Visual Genome website. The image_id field in our dataset corresponds to the image IDs in version 1.4 of the Visual Genome dataset.
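
As a sketch, assuming the usual Visual Genome layout where the two image archives extract into VG_100K and VG_100K_2 folders with filenames of the form <image_id>.jpg (an assumption about the VG download, not part of VAW itself), an instance's image can be located like this:

```python
from pathlib import Path

# Placeholder path: wherever you extracted the Visual Genome image zips.
VG_ROOT = Path("/path/to/visual_genome")

def image_path(image_id: int) -> Path:
    # VG images are split across two folders, named by image id.
    for folder in ("VG_100K", "VG_100K_2"):
        candidate = VG_ROOT / folder / f"{image_id}.jpg"
        if candidate.exists():
            return candidate
    raise FileNotFoundError(f"image {image_id} not found under {VG_ROOT}")
```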

Explore Data and View Live Demo

Head over to our accompanying website to explore the dataset. The website allows you to explore the VAW dataset by filtering the annotations by object, positive attribute, or negative attribute in the train/val set. It also hosts an interactive demo of the SCoNE algorithm described in our paper.

Dataset Statistics

Basic Stats

| Detail | Stat |
| --- | --- |
| Number of Instances | 260,895 |
| Number of Total Images | 72,274 |
| Number of Unique Attributes | 620 |
| Number of Object Categories | 2,260 |
| Average Annotations per Instance (Overall) | 3.56 |
| Average Annotations per Instance (Train) | 3.02 |
| Average Annotations per Instance (Val) | 7.03 |
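
Counts of this kind can be recomputed directly from the annotation files. A small sketch follows; exact agreement with the table depends on the counting convention (here an annotation is any positive or negative attribute label):

```python
import json

splits = ["data/train_part1.json", "data/train_part2.json",
          "data/val.json", "data/test.json"]

instances = []
for path in splits:
    with open(path) as f:
        instances += json.load(f)

images = {inst["image_id"] for inst in instances}
attributes, objects = set(), set()
n_labels = 0
for inst in instances:
    attributes.update(inst["positive_attributes"])
    attributes.update(inst["negative_attributes"])
    objects.add(inst["object_name"])
    n_labels += len(inst["positive_attributes"]) + len(inst["negative_attributes"])

print(f"{len(instances)} instances, {len(images)} images")
print(f"{len(attributes)} unique attributes, {len(objects)} object categories")
print(f"{n_labels / len(instances):.2f} annotations per instance on average")
```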

Evaluation

The evaluation script is provided in eval/evaluator.py. We also provide eval/eval.py as an example of how to use the evaluation script. In particular, eval.py expects the following inputs:

  1. fpath_pred: path to the numpy array pred of your model's predictions (shape (n_instances, n_class)). pred[i, j] is the predicted probability for attribute class j of instance i. We provide eval/pred.npy as a sample, which is the output of our best model (last row of Table 2 in the paper).
  2. fpath_label: path to the numpy array gt_label that contains the ground-truth labels of all instances in the test set (shape (n_instances, n_class)). gt_label[i, j] equals 1 if instance i is labeled positive for attribute j, 0 if it is labeled negative for attribute j, and 2 if it is unlabeled for attribute j. We provide eval/gt_label.npy as a sample, which we created from data/test.json; see the sketch after this list.
  3. Other files in the data folder, which have been set to default values in eval/eval.py.
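
A minimal sketch of how gt_label could be built from data/test.json. It assumes a JSON file mapping each attribute name to a column index; the filename data/attribute_index.json below is a placeholder for whichever attribute-index file ships in data/:

```python
import json
import numpy as np

with open("data/test.json") as f:
    test = json.load(f)

# attr2idx maps attribute name -> column index (placeholder filename).
with open("data/attribute_index.json") as f:
    attr2idx = json.load(f)

# Initialize every entry to 2 = unlabeled, matching the encoding above.
gt_label = np.full((len(test), len(attr2idx)), 2, dtype=np.int64)
for i, inst in enumerate(test):
    for attr in inst["positive_attributes"]:
        gt_label[i, attr2idx[attr]] = 1
    for attr in inst["negative_attributes"]:
        gt_label[i, attr2idx[attr]] = 0

np.save("gt_label.npy", gt_label)
```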

From the eval folder, run the evaluation script as follows:

python eval.py --fpath_pred pred.npy --fpath_label gt_label.npy

We recently updated the grouping of attributes, so there is a small discrepancy between the scores of our eval/pred.npy and the per-group numbers reported in the paper. A detailed attribute-wise breakdown is also saved in the format shown in eval/output_detailed.txt.

Citation

Please cite our CVPR 2021 paper if you use the VAW dataset or the SCoNE algorithm in your work.

@InProceedings{Pham_2021_CVPR,
    author    = {Pham, Khoi and Kafle, Kushal and Lin, Zhe and Ding, Zhihong and Cohen, Scott and Tran, Quan and Shrivastava, Abhinav},
    title     = {Learning To Predict Visual Attributes in the Wild},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {13018-13028}
}

Disclaimer and Contact

This dataset contains objects labeled with a variety of attributes, including those applied to people. Datasets and their use are the subject of important ongoing discussions in the AI community, especially datasets that include people, and we hope to play an active role in those discussions. If you have any feedback regarding this dataset, we welcome your input at [email protected].
