
CxGrad - Official PyTorch Implementation

Contextual Gradient Scaling for Few-Shot Learning
Sanghyuk Lee, Seunghyun Lee, and Byung Cheol Song
In WACV 2022. (Paper link will be provided soon)

This repository is an official PyTorch implementation for "Contextual Gradient Scaling for Few-Shot Learning" in WACV 2022.
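At a high level, the method scales the gradients used in the inner loop of gradient-based meta-learning, conditioned on the task. The snippet below is not the authors' implementation; it is a minimal sketch of the method family that CxGrad, MAML++, and ALFA belong to: MAML-style adaptation where each gradient is modulated by a learnable scale. All names (make_scales, inner_adapt) are illustrative, and the free-parameter scales stand in for the paper's context-conditioned ones.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def make_scales(model):
        # One learnable elementwise scale per parameter tensor. These are free
        # parameters here; in CxGrad the scaling is generated from task context.
        return {n: nn.Parameter(torch.ones_like(p))
                for n, p in model.named_parameters()}

    def inner_adapt(model, scales, x_support, y_support, inner_lr=0.01, steps=5):
        # Fast weights start from the meta-learned initialization.
        fast_weights = dict(model.named_parameters())
        for _ in range(steps):
            logits = torch.func.functional_call(model, fast_weights, (x_support,))
            loss = F.cross_entropy(logits, y_support)
            grads = torch.autograd.grad(loss, list(fast_weights.values()),
                                        create_graph=True)
            # Scale each gradient before the SGD-style update: this per-parameter
            # modulation is the general idea that ALFA-style methods refine.
            fast_weights = {
                name: w - inner_lr * scales[name] * g
                for (name, w), g in zip(fast_weights.items(), grads)
            }
        return fast_weights

During meta-training, the query-set loss would be backpropagated through the returned fast weights to update both the initialization and the scales; see the paper for how CxGrad derives the scaling from task context.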

Installation

This code is based on PyTorch. Please create a conda environment and activate it by running the commands below:

conda env create --file environment.yml -n CxGrad
conda activate CxGrad
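
To verify the setup (an optional quick check, assuming PyTorch is listed in environment.yml), confirm that PyTorch imports and can see your GPU:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"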

Datasets

We provide instructions for downloading 4 datasets: miniImageNet, tieredImageNet, CUB, and CIFAR-FS. Download the datasets you want to use and move them into the datasets directory.

  1. miniImageNet: Download mini_imagenet_full_size.tar.bz2 from this link, provided in MAML++. Note that by downloading and using miniImageNet, you accept the terms and conditions in imagenet_license.md.

  2. tieredImageNet: Download tiered_imagenet.tar from this link.

  3. CIFAR-FS: Download cifar100.zip from this link. The splits and the download link are provided by Bertinetto.

  4. CUB: Download CUB_200_2011.tgz from this link. The classes of each split are randomly chosen, so we provide the splits used in our experiments: CUB_split_train.txt, CUB_split_val.txt, and CUB_split_test.txt in datasets/preprocess. These splits were generated with a script written by Chen.

Then, run the command below to preprocess the datasets you downloaded.

python preprocess/preprocess.py --datasets DATASET1 DATASET2 ...
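
For example, to preprocess only miniImageNet and CUB (the dataset names passed to --datasets are assumed here to match the directory names shown below):

python preprocess/preprocess.py --datasets miniImageNet CUB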

The resulting directory structure should look like this:

CxGrad
├── datasets
│   ├── miniImageNet
│   │   ├── train
│   │   ├── val
│   │   └── test
│   ├── tieredImageNet
│   │   ├── train
│   │   ├── val
│   │   └── test
│   ├── CIFAR-FS
│   │   ├── train
│   │   ├── val
│   │   └── test
│   └── CUB
│       ├── train
│       ├── val
│       └── test
├── utils
├── README.md
└── ...
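
Before training, you can sanity-check this layout with a short script like the one below (a hypothetical helper, not part of this repository):

    import os

    DATASETS = ["miniImageNet", "tieredImageNet", "CIFAR-FS", "CUB"]
    SPLITS = ["train", "val", "test"]

    # Report whether each expected split directory exists under datasets/.
    for name in DATASETS:
        for split in SPLITS:
            path = os.path.join("datasets", name, split)
            print(f"{path}: {'ok' if os.path.isdir(path) else 'MISSING'}")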

Run experiments

  • Change directory to experiment_scripts.

Train

  • To train the model on N-way K-shot miniImageNet classification, run
    bash mini_imagenet_Nway_Kshot/CxGrad_4conv.sh GPU_ID

  • For tieredImageNet, run
    bash tiered_imagenet_Nway_Kshot/CxGrad_4conv.sh GPU_ID
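
  For example, to train 5-way 5-shot miniImageNet on GPU 0:
    bash mini_imagenet_5way_5shot/CxGrad_4conv.sh 0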
    

Test

  • Example: to test on CUB using the model trained on 5-way 5-shot miniImageNet, run
    TEST=1 TEST_DATASET=CUB bash mini_imagenet_5way_5shot/CxGrad_4conv.sh GPU_ID
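
  The same pattern presumably applies to the other preprocessed datasets; for instance (assuming CIFAR-FS is an accepted TEST_DATASET value):
    TEST=1 TEST_DATASET=CIFAR-FS bash mini_imagenet_5way_5shot/CxGrad_4conv.sh GPU_ID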
    

Citation

To be prepared

Acknowledgment

Our work is based on MAML++ and ALFA; thanks to the authors for their great implementations.
