PyTorch implementation of our paper How robust are discriminatively trained zero-shot learning models?

Overview

How robust are discriminatively trained zero-shot learning models?

This repository contains the PyTorch implementation of our paper How robust are discriminatively trained zero-shot learning models?, published in Image and Vision Computing (Elsevier).

Paper Highlights

In this paper, as a continuation of our previous work, we focus on the corruption robustness of discriminative ZSL models. The highlights of our paper are as follows.

  1. To facilitate corruption robustness analyses, we curate and release the first corruption benchmark datasets for ZSL: CUB-C, SUN-C and AWA2-C.
  2. We show that, compared to fully supervised settings, class imbalance and model strength are severe issues affecting the robustness behaviour of ZSL models.
  3. Combined with our previous work, we define and show the pseudo robustness effect, where absolute metrics may not always reflect the robustness behaviour of a model. This effect is present for adversarial examples, but not for corruptions.
  4. We show that recent augmentation methods designed for better corruption robustness can also increase the clean accuracy of ZSL models, and set new strong baselines.
  5. We show in detail that unseen and seen classes are affected disproportionately. We also show that zero-shot and generalized zero-shot performances are affected differently.

Dataset Highlights

We release CUB-C, SUN-C and AWA2-C, which are corrupted versions of three popular ZSL benchmarks. Following previous work, we introduce several corruptions at various severities to test the generalization ability of ZSL models. More details on the design process and the corruptions can be found in the paper.

Repository Contents and Requirements

This repository contains the code to reproduce our results and the scripts needed to generate the corruption datasets. Follow the steps below before running the code.

  • You can use the provided environment yml (or pip requirements.txt) file to install dependencies.
  • Download the pretrained models here and place them under the /model folders.
  • Download the AWA2, SUN and CUB datasets. Please note that we operate on raw images, not the features provided with the datasets.
  • Download the data split/attribute files here and extract the contents into the /data folder.
  • Change the necessary paths in the json file.

โ— The code in this repository lets you evaluate our provided models with AWA2, CUB-C and SUN-C. If you want to use corruption datasets, you can take generate_corruption.py file and use it in your own project.

โ— Additional Content โ—

In addition to the paper, we release our supplementary file supp.pdf. It includes the following.

1. Average errors (ZSL and GZSL) for each dataset per corruption category. These are for the ALE model and should be used to weight the errors when calculating mean corruption errors (a sketch of this computation follows this list). This essentially replaces the AlexNet error weighting used for the ImageNet-C dataset.

2. Mean corruption errors (ZSL and GZSL) of the ALE model for seen/unseen/harmonic and ZSL top-1 accuracies on each dataset. These results include the MCE values for the original ALE model and for ALE with the five defense methods used in our paper (i.e. total-variance minimization, spatial smoothing, label smoothing, AugMix and ANT). These values can be used as baseline scores when comparing the robustness of your method.
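As a rough illustration of how the supplementary ALE errors can be used, the sketch below computes a mean corruption error following the ImageNet-C recipe, with the ALE errors from supp.pdf taking the place of the AlexNet reference errors. The dictionaries are placeholders to be filled with your model's errors and the corresponding ALE values.

import numpy as np

def mean_corruption_error(model_errors, ale_errors):
    # model_errors / ale_errors map corruption name -> list of errors,
    # one entry per severity level.
    ces = []
    for corruption, errs in model_errors.items():
        ref = ale_errors[corruption]
        # Corruption error: summed errors over severities, normalised by
        # the ALE model's summed errors for the same corruption.
        ces.append(np.sum(errs) / np.sum(ref))
    # mCE is the average corruption error over all corruption types.
    return float(np.mean(ces))

# Example with hypothetical numbers:
# mce = mean_corruption_error(
#     {"gaussian_noise": [0.42, 0.48, 0.55, 0.61, 0.66]},
#     {"gaussian_noise": [0.45, 0.50, 0.58, 0.64, 0.70]})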

Running the code

After you've downloaded the necessary dataset files, you can run the code simply with

python run.py

To change the experimental parameters, refer to the params.json file. Details on the json parameters can be found in the code. By default, run.py looks for a params.json file in the current folder. If you want to run the code with another json file, use

python run.py --json_path path_to_json
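The actual parsing logic lives in run.py; as a minimal sketch under that assumption, the behaviour described above (a default params.json in the current folder, overridable via --json_path) could look like this:

import argparse
import json
from pathlib import Path

def load_params():
    parser = argparse.ArgumentParser()
    parser.add_argument("--json_path", type=str, default="params.json",
                        help="Path to the experiment parameter file.")
    args = parser.parse_args()
    # Read the experiment configuration (dataset paths, model and
    # evaluation settings); the parameter names are defined in the code.
    with Path(args.json_path).open() as f:
        return json.load(f)

if __name__ == "__main__":
    params = load_params()
    print(params)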

Citation

If you find our code or paper useful in your research, please consider citing the following papers.

@inproceedings{yucel2020eccvw,
  title={A Deep Dive into Adversarial Robustness in Zero-Shot Learning},
  author={Yucel, Mehmet Kerim and Cinbis, Ramazan Gokberk and Duygulu, Pinar},
  booktitle={ECCV Workshop on Adversarial Robustness in the Real World},
  pages={3--21},
  year={2020},
  organization={Springer}
}

@article{yucel2022imavis,
  title={How robust are discriminatively trained zero-shot learning models?},
  journal={Image and Vision Computing},
  pages={104392},
  year={2022},
  issn={0262-8856},
  doi={10.1016/j.imavis.2022.104392},
  url={https://www.sciencedirect.com/science/article/pii/S026288562200021X},
  author={Mehmet Kerim Yucel and Ramazan Gokberk Cinbis and Pinar Duygulu},
  keywords={Zero-shot learning, Robust generalization, Adversarial robustness},
}

Acknowledgements

This code base borrows several implementations from here and here, and it is a continuation of our previous work's repository.
