Spatial Contrastive Learning for Few-Shot Classification (SCL)

Paper 📃

This repo contains the official implementation of Spatial Contrastive Learning for Few-Shot Classification (SCL), which presents a novel contrastive learning method applied to few-shot image classification in order to learn more general-purpose embeddings and facilitate test-time adaptation to novel visual categories.

Highlights 🔥

(1) Contrastive Learning for Few-Shot Classification.
We explore contrastive learning as an auxiliary pre-training objective to learn more transferable features and facilitate test-time adaptation for few-shot classification.

(2) Spatial Contrastive Learning (SCL).
We propose a novel Spatial Contrastive (SC) loss that promotes the encoding of relevant spatial information into the learned representations and further encourages class-independent discriminative patterns (see the illustrative sketch after this list).

(3) Contrastive Distillation for Few-Shot Classification.
We introduce a novel contrastive distillation objective to reduce the compactness of the features in the embedding space and provide additional refinement of the representations.
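
To give an intuition for the spatial term in (2), the snippet below is a minimal, simplified sketch of contrasting two augmented views through their spatial feature maps rather than pooled global vectors. It is not the paper's exact SC loss, which relies on a learned attention module (AttentionSimilarity, used in the example further below); the soft-attention aggregation here is an assumption made purely for illustration.

import torch
import torch.nn.functional as F

def spatial_contrastive_sketch(feats1, feats2, temperature=0.1):
    """Simplified illustration of a spatial contrastive objective.

    feats1, feats2: (B, C, H, W) feature maps of two augmented views of the
    same batch of images. This is NOT the paper's exact SC loss.
    """
    B, C, H, W = feats1.shape
    z1 = F.normalize(feats1.flatten(2), dim=1)       # (B, C, H*W), unit-norm local features
    z2 = F.normalize(feats2.flatten(2), dim=1)       # (B, C, H*W)

    # cosine similarity between every spatial location of every pair of images
    sim = torch.einsum('bcm,dcn->bdmn', z1, z2)      # (B, B, H*W, H*W)

    # soft attention over the second view's locations, then aggregate to one
    # image-to-image score (a stand-in for the learned attention module)
    attn = sim.softmax(dim=-1)
    scores = (attn * sim).sum(dim=-1).mean(dim=-1)   # (B, B)

    # InfoNCE over the aggregated scores: the matching view is the positive
    logits = scores / temperature
    targets = torch.arange(B, device=logits.device)
    return F.cross_entropy(logits, targets)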

Requirements 🔧

This repo was tested with CentOS 7.7.1908, Python 3.7.7, PyTorch 1.6.0, and CUDA 10.2. However, we expect the provided code to be compatible with both older and newer versions.

The required packages are pytorch and torchvision, together with PIL and scikit-learn for data preprocessing and evaluation, tqdm for showing the training progress, and some additional modules. To set up the necessary modules, simply run:

pip install -r requirements.txt

Datasets 💽

Standard Few-shot Setting

For the standard few-shot experiments, we use the ImageNet derivatives miniImageNet and tieredImageNet, in addition to the CIFAR-100 derivatives FC100 and CIFAR-FS. These datasets were preprocessed by the MetaOptNet repo, then renamed and re-uploaded by RFS, and can be downloaded from here: [DropBox]

After downloading all of the datasets, place them in the same folder, which we refer to as DATA_PATH, with each dataset in its own subfolder, e.g., DATA_PATH/FC100. Then, during training, set the training argument data_root to DATA_PATH.
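
An illustrative layout (the exact folder names are an assumption and should match what the training scripts expect):

DATA_PATH/
├── miniImageNet/
├── tieredImageNet/
├── FC100/
└── CIFAR-FS/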

Cross-domain Few-shot Setting

In the cross-domain setting, we train on miniImageNet but test on a different dataset. Specifically, we consider 4 datasets: cub, cars, places and plantae. All of the datasets can be downloaded as follows:

cd dataset/download
python download.py DATASET_NAME DATA_PATH

where DATASET_NAME refers to one of the 4 datasets (cub, cars, places and plantae) and DATA_PATH refers to the path where the data will be downloaded and saved, which can be the same path as for the standard datasets above.
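
For example, to download the CUB dataset into DATA_PATH, run the following from the dataset/download folder:

python download.py cub DATA_PATH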

Running

All of the commands necessary to reproduce the results of the paper can be found in scripts/run.sh.

In general, to use the proposed method for few-shot classification, there is a two-stage approach to follow: (1) train the model on the merged meta-training set using train_contrastive.py, then (2) evaluate the pre-trained embedding model on the meta-testing stage using eval_fewshot.py. Note that an optional distillation step can also be applied after the first pre-training stage using train_distillation.py. The overall workflow is outlined below.
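
The outline below only illustrates the order of the stages; the exact arguments for each script are listed in scripts/run.sh.

# stage 1: contrastive pre-training on the merged meta-training set
python train_contrastive.py ...

# optional: contrastive distillation of the stage-1 model
python train_distillation.py ...

# stage 2: few-shot evaluation of the pre-trained embeddings
python eval_fewshot.py ...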

Other Use Cases

The proposed SCL method is not specific to few-shot classification and can also be used for standard supervised or self-supervised image classification training. For instance, this can be done as follows:

import torch

from losses import ContrastiveLoss
from models.attention import AttentionSimilarity

attention_module = AttentionSimilarity(hidden_size=128)  # hidden_size depends on the encoder
contrast_criterion = ContrastiveLoss(temperature=10)  # the inverse temperature is used, i.e., a temperature of 0.1

...

# apply some augmentations to obtain two views of the same inputs
aug_inputs1, aug_inputs2 = augment(inputs)
aug_inputs = torch.cat([aug_inputs1, aug_inputs2], dim=0)

# forward pass through the encoder
features = encoder(aug_inputs)

# supervised case: positives are defined by the class labels
loss_contrast = contrast_criterion(features, attention=attention_module, labels=labels)

# unsupervised case: positives are the two augmented views of each image
loss_contrast = contrast_criterion(features, attention=attention_module, labels=None)

...
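
As a hypothetical example, the contrastive term could then be combined with a standard classification objective in a training step as sketched below; the classifier head, the label concatenation, and the lambda_contrast weighting are assumptions made purely for illustration, not the repo's exact setup.

# hypothetical combined objective (for illustration only)
pooled = features.mean(dim=[-2, -1]) if features.dim() == 4 else features  # pool spatial maps if present
logits = classifier(pooled)                                                # assumed linear classification head
loss_ce = torch.nn.functional.cross_entropy(logits, torch.cat([labels, labels], dim=0))
loss = loss_ce + lambda_contrast * loss_contrast                           # assumed weighting hyper-parameter

optimizer.zero_grad()
loss.backward()
optimizer.step()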

Citation 📝

If you find this repo useful for your research, please consider citing the paper as follows:

@article{ouali2020spatial,
  title={Spatial Contrastive Learning for Few-Shot Classification},
  author={Ouali, Yassine and Hudelot, C{\'e}line and Tami, Myriam},
  journal={arXiv preprint arXiv:2012.13831},
  year={2020}
}

For any questions, please contact Yassine Ouali.

Acknowledgements

  • The code structure is based on the RFS repo.
  • The cross-domain datasets code is based on the CrossDomainFewShot repo.