[ICCV 2021] Official PyTorch implementation of Discriminative Region-based Multi-Label Zero-Shot Learning, with SOTA results on NUS-WIDE and OpenImages.

Overview


Discriminative Region-based Multi-Label Zero-Shot Learning (ICCV 2021)

[arXiv][Project page >> coming soon]

Sanath Narayan*, Akshita Gupta*, Salman Khan, Fahad Shahbaz Khan, Ling Shao, Mubarak Shah

(* denotes equal contribution)

Installation

The codebase is built on PyTorch 1.1.0 and tested in an Ubuntu 16.04 environment (Python 3.6, CUDA 9.0, cuDNN 7.5).

To install, follow these instructions:

conda create -n mlzsl python=3.6
conda activate mlzsl
conda install pytorch=1.1 torchvision=0.3 cudatoolkit=9.0 -c pytorch
pip install matplotlib scikit-image scikit-learn opencv-python yacs joblib natsort h5py tqdm pandas
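
Optionally, you can sanity-check the environment with a short snippet of ours (not part of the repo) before proceeding:

# Verify that the installed versions match the tested configuration.
import torch
import torchvision

print("PyTorch:", torch.__version__)            # expected: 1.1.0
print("torchvision:", torchvision.__version__)  # expected: 0.3.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))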

Install the warmup scheduler:

cd pytorch-gradual-warmup-lr; python setup.py install; cd ..
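
For reference, here is a minimal usage sketch of the gradual warmup scheduler installed above; the multiplier, warmup length, and base scheduler are illustrative placeholders, and the actual schedule used for training is defined in the training scripts.

# Ramp the learning rate up over the first epochs, then hand off to a
# standard scheduler (all hyperparameters below are placeholders).
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR
from warmup_scheduler import GradualWarmupScheduler

model = torch.nn.Linear(10, 2)  # stand-in model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

cosine = CosineAnnealingLR(optimizer, T_max=50)
scheduler = GradualWarmupScheduler(optimizer, multiplier=1.0,
                                   total_epoch=3, after_scheduler=cosine)

for epoch in range(50):
    scheduler.step(epoch)
    # ... run one training epoch with `optimizer` ...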

Attention Visualization

Results

Our approach on the NUS-WIDE dataset.

Our approach on the OpenImages dataset.

Training and Evaluation

NUS-WIDE

Step 1: Data preparation

  1. Download the pre-computed features from here and store them in the features folder inside the BiAM/datasets/NUS-WIDE directory (a sketch for inspecting these files follows below).
  2. [Optional] Alternatively, extract the features yourself using the original NUS-WIDE dataset from here and run the script below:
python feature_extraction/extract_nus_wide.py
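
If you downloaded the pre-computed features, you can inspect a feature file before training with the sketch below; the file name and any dataset keys are assumptions for illustration, so check the actual .h5 files in the features folder.

# List every group/dataset stored in a feature file (name assumed).
import h5py

with h5py.File("datasets/NUS-WIDE/features/nus_wide_train.h5", "r") as f:
    f.visit(print)  # prints the name of each group/dataset in the file
    # inspect a dataset's shape/dtype with e.g. f["<key>"].shape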

Step 2: Training from scratch

To train and evaluate the multi-label zero-shot learning model on the full NUS-WIDE dataset, run:

sh scripts/train_nus.sh

Step 3: Evaluation using pretrained weights

To evaluate the multi-label zero-shot model on NUS-WIDE, download the pretrained weights from here, store them in the NUS-WIDE folder inside the pretrained_weights directory, and run:

sh scripts/evaluate_nus.sh
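
For context, NUS-WIDE results are typically reported as mAP and F1 at top-K (K = 3, 5); the sketch below shows one common way to compute these metrics and is not the repo's own evaluation code, which is invoked by the script above.

# Multi-label metrics: mean of per-class average precision (mAP) and
# overall F1 when each image predicts its top-K highest-scoring labels.
import numpy as np
from sklearn.metrics import average_precision_score

def map_and_f1_at_k(scores, labels, k=3):
    """scores, labels: (num_images, num_classes) arrays; labels are 0/1."""
    mAP = np.mean([average_precision_score(labels[:, c], scores[:, c])
                   for c in range(labels.shape[1]) if labels[:, c].any()])
    topk = np.argsort(-scores, axis=1)[:, :k]   # top-K classes per image
    preds = np.zeros_like(labels)
    np.put_along_axis(preds, topk, 1, axis=1)
    tp = (preds * labels).sum()
    precision = tp / preds.sum()                # = tp / (k * num_images)
    recall = tp / labels.sum()
    return mAP, 2 * precision * recall / (precision + recall)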

OpenImages

Step 1: Data preparation

  1. Please download the annotations for training, validation, and testing into this folder.

  2. Store the annotations inside BiAM/datasets/OpenImages.

  3. To extract the features for the OpenImages-v4 dataset, run the scripts below to first crawl the images and then extract their features; an illustrative sketch of the crawl step follows the download commands:

## Crawl the images from the web
python ./datasets/OpenImages/download_imgs.py  #`data_set` == `train`: download images into `./image_data/train/`
python ./datasets/OpenImages/download_imgs.py  #`data_set` == `validation`: download images into `./image_data/validation/`
python ./datasets/OpenImages/download_imgs.py  #`data_set` == `test`: download images into `./image_data/test/`
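
The sketch below illustrates the kind of crawl download_imgs.py performs for one split: fetch each image URL and save it under image_data/<split>/. The CSV file name and its columns are assumptions for illustration, not the repo's actual format.

# Download every image of a split from its URL (illustrative only).
import os
import urllib.request
import pandas as pd

data_set = "train"  # or "validation" / "test"
out_dir = os.path.join("image_data", data_set)
os.makedirs(out_dir, exist_ok=True)

urls = pd.read_csv(data_set + "_image_urls.csv")  # assumed columns: ImageID, OriginalURL
for _, row in urls.iterrows():
    dest = os.path.join(out_dir, row["ImageID"] + ".jpg")
    if os.path.exists(dest):
        continue  # already downloaded
    try:
        urllib.request.urlretrieve(row["OriginalURL"], dest)
    except Exception as exc:  # skip broken or removed URLs
        print("skipping", row["ImageID"], exc)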

## Run feature extraction for all 3 splits
python feature_extraction/extract_openimages_train.py
python feature_extraction/extract_openimages_test.py
python feature_extraction/extract_openimages_val.py

Step 2: Training from scratch

To train and evaluate the multi-label zero-shot learning model on the full OpenImages-v4 dataset, run:

sh scripts/train_openimages.sh
sh scripts/evaluate_openimages.sh

Step 3: Evaluation using pretrained weights

To evaluate the multi-label zero-shot model on OpenImages, download the pretrained weights from here, store them in the OPENIMAGES folder inside the pretrained_weights directory, and run:

sh scripts/evaluate_openimages.sh

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.

Citation

If you find this repository useful, please consider giving a star ⭐ and citation 🎊 :

@inproceedings{narayan2021discriminative,
title={Discriminative Region-based Multi-Label Zero-Shot Learning},
author={Narayan, Sanath and Gupta, Akshita and Khan, Salman and Khan, Fahad Shahbaz and Shao, Ling and Shah, Mubarak},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
publisher={IEEE},
year={2021}
}

Contact

Should you have any questions, please contact πŸ“§ [email protected]
