Source code for Improved Few-Shot Visual Classification (CVPR 2020) and Enhancing Few-Shot Image Classification with Unlabelled Examples (WACV 2022)

Overview

This repository contains the source code for the following papers:

- Improved Few-Shot Visual Classification (Simple CNAPS), CVPR 2020
- Enhancing Few-Shot Image Classification with Unlabelled Examples (Transductive CNAPS), WACV 2022
- Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning

The code base was authored by Peyman Bateni, Jarred Barber, Raghav Goyal, Vaden Masrani, Dr. Jan-Willem van de Meent, Dr. Leonid Sigal and Dr. Frank Wood. The source code builds on the original code base for CNAPS authored by Dr. John Bronskill, Jonathan Gordon, James Requeima, Dr. Sebastian Nowozin, and Dr. Richard E. Turner. We would like to thank them for their help, support and early sharing of their work. To see the original CNAPS repository, visit https://github.com/cambridge-mlg/cnaps.

Simple CNAPS

Simple CNAPS proposes the use of hierarchically regularized cluster means and covariance estimates within a Mahalanobis-distance-based classifier for improved few-shot classification accuracy. This classifier is used on top of the same neural adaptive feature extractor as CNAPS. For more details, please refer to our paper on Simple CNAPS: Improved Few-Shot Visual Classification. The source code for this paper is provided in the simple-cnaps-src directory. To reproduce our results, please refer to the README.md file within that folder.
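As a rough illustration of the core idea (not the exact implementation in simple-cnaps-src), the sketch below builds class means and regularized class covariances from support features and scores query features by negative squared Mahalanobis distance. The blending ratio, the regularization constant `eps`, and all helper names are illustrative assumptions.

```python
# Hedged sketch of a Mahalanobis-distance classifier with regularized class
# covariances, in the spirit of Simple CNAPS. The exact regularization
# schedule used in this repository may differ.
import torch


def mahalanobis_logits(support_feats, support_labels, query_feats, eps=1.0):
    """support_feats: [N, D], support_labels: [N] ints in [0, K),
    query_feats: [M, D]. Returns negative squared Mahalanobis distances
    [M, K], usable as classification logits."""
    num_classes = int(support_labels.max().item()) + 1
    dim = support_feats.shape[1]
    # Task-level covariance, shared across classes.
    task_cov = torch.cov(support_feats.T) if support_feats.shape[0] > 1 else torch.eye(dim)

    logits = []
    for k in range(num_classes):
        class_feats = support_feats[support_labels == k]               # [n_k, D]
        mean_k = class_feats.mean(dim=0)                               # class mean
        n_k = class_feats.shape[0]
        class_cov = torch.cov(class_feats.T) if n_k > 1 else torch.zeros(dim, dim)
        # Hierarchical regularization: blend class, task and identity covariances.
        lam = n_k / (n_k + 1.0)
        cov_k = lam * class_cov + (1.0 - lam) * task_cov + eps * torch.eye(dim)
        diff = query_feats - mean_k                                    # [M, D]
        # Squared Mahalanobis distance via a linear solve (no explicit inverse).
        dists = (diff * torch.linalg.solve(cov_k, diff.T).T).sum(dim=1)
        logits.append(-dists)
    return torch.stack(logits, dim=1)                                  # [M, K]


if __name__ == "__main__":
    feats = torch.randn(10, 8)
    labels = torch.tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
    queries = torch.randn(4, 8)
    print(mahalanobis_logits(feats, labels, queries).shape)            # torch.Size([4, 2])
```

In the actual method these statistics are computed on adapted features produced by the CNAPS feature extractor; the sketch simply takes pre-computed feature tensors as input.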

Global Meta-Dataset Rank (Simple CNAPS): https://github.com/google-research/meta-dataset#training-on-all-datasets

Global Mini-ImageNet Rank (Simple CNAPS): see the Papers with Code leaderboards.

Global Tiered-ImageNet Rank (Simple CNAPS): see the Papers with Code leaderboards.

Transductive CNAPS

Transductive CNAPS extends the Simple CNAPS framework to the transductive few-shot learning setting, where all query examples are provided at once. This method uses a two-step transductive task-encoder for adapting the feature extractor, together with a soft k-means cluster refinement procedure, resulting in better test-time accuracy. For additional details, please refer to our paper on Transductive CNAPS: Enhancing Few-Shot Image Classification with Unlabelled Examples. The source code for this work is provided in the transductive-cnaps-src directory. To reproduce our results, please refer to the README.md file within that folder.
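The soft k-means refinement can be sketched roughly as follows; this is a simplified illustration under assumed names and a plain Euclidean soft assignment, not the code in transductive-cnaps-src. Class means are initialized from the labelled support set and then iteratively updated using soft responsibilities computed over the unlabelled query set.

```python
# Hedged sketch of soft k-means refinement of class means using unlabelled
# query features, in the spirit of Transductive CNAPS. The number of
# refinement steps and the assignment rule are simplifying assumptions.
import torch


def refine_class_means(support_feats, support_labels, query_feats, num_steps=5):
    """support_feats: [N, D], support_labels: [N] ints in [0, K),
    query_feats: [M, D]. Returns refined class means [K, D]."""
    num_classes = int(support_labels.max().item()) + 1
    # Initialize each class mean from its labelled support examples.
    means = torch.stack([support_feats[support_labels == k].mean(dim=0)
                         for k in range(num_classes)])                 # [K, D]
    one_hot = torch.nn.functional.one_hot(support_labels, num_classes).float()  # [N, K]

    for _ in range(num_steps):
        # Soft responsibilities of each query example for each class.
        dists = torch.cdist(query_feats, means) ** 2                   # [M, K]
        resp = torch.softmax(-dists, dim=1)                            # [M, K]
        # Update means from hard-labelled support and soft-labelled queries.
        weights = torch.cat([one_hot, resp], dim=0)                    # [N+M, K]
        feats = torch.cat([support_feats, query_feats], dim=0)         # [N+M, D]
        means = (weights.T @ feats) / weights.sum(dim=0, keepdim=True).T
    return means
```

In the actual method, class covariance estimates are refined alongside the means and the refined statistics feed the same Mahalanobis-distance classifier; the sketch tracks only the means for brevity.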

Global Meta-Dataset Rank (Transductive CNAPS): https://github.com/google-research/meta-dataset#training-on-all-datasets

Global Mini-ImageNet Rank (Transductive CNAPS): see the Papers with Code leaderboards.

Global Tiered-ImageNet Rank (Transductive CNAPS): see the Papers with Code leaderboards.

Active and Continual Learning

We additionally evaluate both methods in the paradigms of "out of the box" active and continual learning. These settings were first proposed by Requeima et al. and study how well few-shot classifiers, trained for few-shot classification, can be deployed for active and continual learning without any problem-specific fine-tuning or training. For additional details on our active and continual learning experiments and algorithms, please refer to our latest paper: Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning. For code and instructions to reproduce the reported experiments, please refer to the active-learning and continual-learning folders.
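As a rough illustration of the "out of the box" active-learning protocol (the actual experiments live in the active-learning folder), the loop below repeatedly scores an unlabelled pool with a frozen few-shot classifier and requests labels for the least confident examples. The `classify(support_x, support_y, pool_x)` interface and the confidence-based selection rule are hypothetical stand-ins, not this repository's API.

```python
# Hedged sketch of an "out of the box" active-learning loop: the few-shot
# classifier is used as-is (no fine-tuning), and labels are requested for the
# pool examples it is least confident about.
import torch


def least_confident_indices(logits, budget):
    """Pick `budget` pool indices with the lowest maximum class probability."""
    confidence = torch.softmax(logits, dim=1).max(dim=1).values
    return torch.argsort(confidence)[:budget]


def active_learning_loop(classify, support_x, support_y, pool_x, pool_y,
                         rounds=5, budget=10):
    for _ in range(rounds):
        logits = classify(support_x, support_y, pool_x)                # [pool, K]
        picked = least_confident_indices(logits, budget)
        # Move the selected examples from the pool into the labelled support set.
        support_x = torch.cat([support_x, pool_x[picked]])
        support_y = torch.cat([support_y, pool_y[picked]])
        keep = torch.ones(pool_x.shape[0], dtype=torch.bool)
        keep[picked] = False
        pool_x, pool_y = pool_x[keep], pool_y[keep]
    return support_x, support_y
```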

Meta-Dataset Results

| Dataset | Simple CNAPS | Simple CNAPS | Transductive CNAPS | Transductive CNAPS |
| --- | --- | --- | --- | --- |
| --shuffle_dataset | False | True | False | True |
| In-Domain Datasets | --- | --- | --- | --- |
| ILSVRC | 58.6±1.1 | 56.5±1.1 | 58.8±1.1 | 57.9±1.1 |
| Omniglot | 91.7±0.6 | 91.9±0.6 | 93.9±0.4 | 94.3±0.4 |
| Aircraft | 82.4±0.7 | 83.8±0.6 | 84.1±0.6 | 84.7±0.5 |
| Birds | 74.9±0.8 | 76.1±0.9 | 76.8±0.8 | 78.8±0.7 |
| Textures | 67.8±0.8 | 70.0±0.8 | 69.0±0.8 | 66.2±0.8 |
| Quick Draw | 77.7±0.7 | 78.3±0.7 | 78.6±0.7 | 77.9±0.6 |
| Fungi | 46.9±1.0 | 49.1±1.2 | 48.8±1.1 | 48.9±1.2 |
| VGG Flower | 90.7±0.5 | 91.3±0.6 | 91.6±0.4 | 92.3±0.4 |
| Out-of-Domain Datasets | --- | --- | --- | --- |
| Traffic Signs | 73.5±0.7 | 59.2±1.0 | 76.1±0.7 | 59.7±1.1 |
| MSCOCO | 46.2±1.1 | 42.4±1.1 | 48.7±1.0 | 42.5±1.1 |
| MNIST | 93.9±0.4 | 94.3±0.4 | 95.7±0.3 | 94.7±0.3 |
| CIFAR10 | 74.3±0.7 | 72.0±0.8 | 75.7±0.7 | 73.6±0.7 |
| CIFAR100 | 60.5±1.0 | 60.9±1.1 | 62.9±1.0 | 61.8±1.0 |
| In-Domain Average Accuracy | 73.8±0.8 | 74.6±0.8 | 75.2±0.8 | 75.1±0.8 |
| Out-of-Domain Average Accuracy | 69.7±0.8 | 65.8±0.8 | 71.8±0.8 | 66.5±0.8 |
| Overall Average Accuracy | 72.2±0.8 | 71.2±0.8 | 73.9±0.8 | 71.8±0.8 |

Mini-ImageNet Results

| Setup | 5-way 1-shot | 5-way 5-shot | 10-way 1-shot | 10-way 5-shot |
| --- | --- | --- | --- | --- |
| Simple CNAPS | 53.2±0.9 | 70.8±0.7 | 37.1±0.5 | 56.7±0.5 |
| Transductive CNAPS | 55.6±0.9 | 73.1±0.7 | 42.8±0.7 | 59.6±0.5 |
| Simple CNAPS + FETI | 77.4±0.8 | 90.3±0.4 | 63.5±0.6 | 83.1±0.4 |
| Transductive CNAPS + FETI | 79.9±0.8 | 91.5±0.4 | 68.5±0.6 | 85.9±0.3 |

Tiered-ImageNet Results

| Setup | 5-way 1-shot | 5-way 5-shot | 10-way 1-shot | 10-way 5-shot |
| --- | --- | --- | --- | --- |
| Simple CNAPS | 63.0±1.0 | 80.0±0.8 | 48.1±0.7 | 70.2±0.6 |
| Transductive CNAPS | 65.9±1.0 | 81.8±0.7 | 54.6±0.8 | 72.5±0.6 |
| Simple CNAPS + FETI | 71.4±1.0 | 86.0±0.6 | 57.1±0.7 | 78.5±0.5 |
| Transductive CNAPS + FETI | 73.8±1.0 | 87.7±0.6 | 65.1±0.8 | 80.6±0.5 |

Citation

We hope you have found our code base helpful! If you use this repository, please cite our papers:

@InProceedings{Bateni2020_SimpleCNAPS,
    author = {Bateni, Peyman and Goyal, Raghav and Masrani, Vaden and Wood, Frank and Sigal, Leonid},
    title = {Improved Few-Shot Visual Classification},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2020}
}

@InProceedings{Bateni2022_TransductiveCNAPS,
    author    = {Bateni, Peyman and Barber, Jarred and van de Meent, Jan-Willem and Wood, Frank},
    title     = {Enhancing Few-Shot Image Classification With Unlabelled Examples},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {2796-2805}
}

@misc{Bateni2022_BeyondSimpleMetaLearning,
    title={Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning}, 
    author={Peyman Bateni and Jarred Barber and Raghav Goyal and Vaden Masrani and Jan-Willem van de Meent and Leonid Sigal and Frank Wood},
    year={2022},
    eprint={2201.05151},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

If you have questions about any of the papers or would like to reach out, please email me directly at [email protected] (my cs.ubc.ca email may have expired by the time you are emailing, as I have graduated!).
