[WACV21] Code for our paper: Samuel, Atzmon and Chechik, "From Generalized zero-shot learning to long-tail with class descriptors"

DRAGON: From Generalized zero-shot learning to long-tail with class descriptors

Paper
Project Website
Video

Overview

DRAGON learns to correct the bias towards head classes on a sample-by-sample basis, and fuses information from class descriptions to improve tail-class accuracy, as described in our paper: Samuel, Atzmon and Chechik, "From Generalized zero-shot learning to long-tail with class descriptors".

Requirements

  • numpy 1.15.4
  • pandas 0.25.3
  • scipy 1.1.0
  • tensorflow 1.14.0
  • keras 2.2.5

Quick installation under Anaconda:

conda env create -f requirements.yml
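
Then activate the environment before running any of the commands below. The environment name is defined in requirements.yml; assuming it is called dragon (check the name field in the file, this name is only an example), activation looks like:

# activate the environment created above; replace "dragon" with the name from requirements.yml
conda activate dragon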

Data Preparation

Datasets: CUB, SUN and AWA.
Download data.tar from here, untar it and place it under the project root directory.

DRAGON
  |--data
      |--CUB
      |--SUN
      |--AWA1
  |--attribute_expert
  |--dataset_handler
  |--fusion
  ...
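
For example, assuming data.tar was downloaded into the project root directory, it can be extracted in place with the standard tar command, which should produce the data/ layout shown above:

# extract the dataset archive in the project root (creates the data/ directory)
tar -xvf data.tar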

Train Experts and Fusion Module

To reproduce the results for DRAGON and its modules (Table 1 in our paper), training and evaluation should follow the training protocol described in our paper (Section 5 - training):

  1. First, train each expert without the hold-out set (partial training set) by executing the following commands:

    • CUB:
      # Visual-Expert training
      PYTHONPATH="./" python visual_expert/main.py --base_train_dir=./checkpoints/CUB --dataset_name=CUB --transfer_task=DRAGON --train_dist=dragon --data_dir=data --batch_size=64 --max_epochs=100 --initial_learning_rate=0.0003 --l2=0.005
      # Attribute-Expert training 
      PYTHONPATH="./" python attribute_expert/main.py --base_train_dir=./checkpoints/CUB --dataset_name=CUB --transfer_task=DRAGON --data_dir=data --train_dist=dragon --batch_size=64 --max_epochs=100 --initial_learning_rate=0.001 --LG_beta=1e-7 --LG_lambda=0.0001 --SG_gain=3 --SG_psi=0.01 --SG_num_K=-1
      
    • SUN:
      # Visual-Expert training
      PYTHONPATH="./" python visual_expert/main.py --base_train_dir=./checkpoints/SUN --dataset_name=SUN --transfer_task=DRAGON --train_dist=dragon --data_dir=data --batch_size=64 --max_epochs=100 --initial_learning_rate=0.0001 --l2=0.01
      # Attribute-Expert training 
      PYTHONPATH="./" python attribute_expert/main.py --base_train_dir=./checkpoints/SUN --dataset_name=SUN --transfer_task=DRAGON --data_dir=data --train_dist=dragon --batch_size=64 --max_epochs=100 --initial_learning_rate=0.001 --LG_beta=1e-6 --LG_lambda=0.001 --SG_gain=10 --SG_psi=0.01 --SG_num_K=-1
      
    • AWA:
      # Visual-Expert training
      PYTHONPATH="./" python visual_expert/main.py --base_train_dir=./checkpoints/AWA1 --dataset_name=AWA1 --transfer_task=DRAGON --train_dist=dragon --data_dir=data --batch_size=64 --max_epochs=100 --initial_learning_rate=0.0003 --l2=0.1
      # Attribute-Expert training 
      PYTHONPATH="./" python attribute_expert/main.py --base_train_dir=./checkpoints/AWA1 --dataset_name=AWA1 --transfer_task=DRAGON --data_dir=data --train_dist=dragon --batch_size=64 --max_epochs=100 --initial_learning_rate=0.001 --LG_beta=0.001 --LG_lambda=0.001 --SG_gain=1 --SG_psi=0.01 --SG_num_K=-1
      
  2. Then, re-train each expert with the hold-out set (full training set) by executing the above commands with the --test_mode flag added as a parameter (see the sketch after this list).

  3. Rename Visual-lr=0.0003_l2=0.005 to Visual and LAGO-lr=0.001_beta=1e-07_lambda=0.0001_gain=3.0_psi=0.01 to LAGO (this is essential since the FusionModule locates trained experts by these exact names, without the hyperparameter suffixes).

  4. Train the fusion-module on partially trained experts (models from step 1) by running the following commands:

    • CUB:
      PYTHONPATH="./" python fusion/main.py --base_train_dir=./checkpoints/CUB --dataset_name=CUB --data_dir=data --initial_learning_rate=0.005 --batch_size=64 --max_epochs=50 --sort_preds=1 --freeze_experts=1 --nparams=2
      
    • SUN:
      PYTHONPATH="./" python fusion/main.py --base_train_dir=./checkpoints/SUN --dataset_name=SUN --data_dir=data --initial_learning_rate=0.0005 --batch_size=64 --max_epochs=50 --sort_preds=1 --freeze_experts=1 --nparams=4
      
    • AWA:
      PYTHONPATH="./" python fusion/main.py --base_train_dir=./checkpoints/AWA1 --dataset_name=AWA1 --data_dir=data --initial_learning_rate=0.005 --batch_size=64 --max_epochs=50 --sort_preds=1 --freeze_experts=1 --nparams=4
      
  5. Finally, evaluate the fusion-module with fully-trained experts (models from step 2) by executing the step 4 commands with the --test_mode flag added as a parameter.
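
For reference, steps 2-3 for CUB might look like the following minimal sketch. It assumes --test_mode is a boolean flag appended to the step 1 commands, and that the trained expert directories are created directly under ./checkpoints/CUB; adjust the paths and hyperparameter suffixes to match your own runs:

# Step 2: re-train both CUB experts on the full training set (hold-out included)
PYTHONPATH="./" python visual_expert/main.py --base_train_dir=./checkpoints/CUB --dataset_name=CUB --transfer_task=DRAGON --train_dist=dragon --data_dir=data --batch_size=64 --max_epochs=100 --initial_learning_rate=0.0003 --l2=0.005 --test_mode
PYTHONPATH="./" python attribute_expert/main.py --base_train_dir=./checkpoints/CUB --dataset_name=CUB --transfer_task=DRAGON --data_dir=data --train_dist=dragon --batch_size=64 --max_epochs=100 --initial_learning_rate=0.001 --LG_beta=1e-7 --LG_lambda=0.0001 --SG_gain=3 --SG_psi=0.01 --SG_num_K=-1 --test_mode

# Step 3: rename the expert directories so the fusion-module can find them by name
mv ./checkpoints/CUB/Visual-lr=0.0003_l2=0.005 ./checkpoints/CUB/Visual
mv ./checkpoints/CUB/LAGO-lr=0.001_beta=1e-07_lambda=0.0001_gain=3.0_psi=0.01 ./checkpoints/CUB/LAGO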

Pre-trained Models and Checkpoints

Download checkpoints.tar from here, untar it and place it under the project root directory.

checkpoints
  |--CUB
      |--Visual
      |--LAGO
      |--Dual2ParametricRescale-lr=0.005_freeze=1_sort=1_topk=-1_f=2_s=(2, 2)
  |--SUN
      |--Visual
      |--LAGO
      |--Dual4ParametricRescale-lr=0.0005_freeze=1_sort=1_topk=-1_f=2_s=(2, 2)
  |--AWA1
      |--Visual
      |--LAGO
      |--Dual4ParametricRescale-lr=0.005_freeze=1_sort=1_topk=-1_f=2_s=(2, 2)

Cite Our Paper

If you find our paper and repo useful, please cite:

@InProceedings{samuel2020longtail,
  author    = {Samuel, Dvir and Atzmon, Yuval and Chechik, Gal},
  title     = {From Generalized Zero-Shot Learning to Long-Tail With Class Descriptors},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year      = {2021}}