Implementation of the paper "Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning"

Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning

This is the implementation of the paper "Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning" (accepted to CVPR 2021).

For more information, check out the paper on [arXiv].

Requirements

  • Python 3.8
  • PyTorch 1.8.1 (>1.1.0)
  • CUDA 11.2
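
To quickly confirm that the environment matches these requirements, a minimal check (assuming PyTorch and torchvision are already installed) looks like this:

import torch
import torchvision

print("PyTorch:", torch.__version__)            # expected 1.8.1 (anything > 1.1.0)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA build:", torch.version.cuda)        # expected 11.x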

Preparing Few-Shot Class-Incremental Learning Datasets

Download the following datasets:

1. CIFAR-100

Automatically downloaded via torchvision.
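
For reference, a minimal sketch of letting torchvision fetch CIFAR-100 into the datasets directory (transforms omitted; the root path follows the directory layout shown later):

from torchvision.datasets import CIFAR100

# Downloads and caches CIFAR-100 under ../datasets/CIFAR100 on first use.
train_set = CIFAR100(root="../datasets/CIFAR100", train=True, download=True)
test_set = CIFAR100(root="../datasets/CIFAR100", train=False, download=True)
print(len(train_set), len(test_set))   # 50000 training / 10000 test images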

2. MiniImageNet

(1) Download the MiniImageNet train/test images [github], and prepare the related datasets according to [TOPIC].

(2) Or download the processed data from our Google Drive: [mini-imagenet.zip] (and place the entire folder under the datasets/ directory).
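
If you use the processed archive, a minimal sketch for unpacking it into place (assuming mini-imagenet.zip sits in the current directory and contains a top-level mini-imagenet/ folder with train/, test/ and the csv files):

import zipfile

# Unpack the processed MiniImageNet archive under ../datasets/.
with zipfile.ZipFile("mini-imagenet.zip") as archive:
    archive.extractall("../datasets/")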

3. CUB200

(1) Download the CUB200 train/test images, and prepare the related datasets according to [TOPIC]:

wget http://www.vision.caltech.edu/visipedia-data/CUB-200-2011/CUB_200_2011.tgz

(2) Or download the processed data from our Google Drive: [cub.zip] (and place the entire folder under the datasets/ directory).
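
For the raw images fetched with the wget command above, a minimal extraction sketch (the raw archive still has to be split into cub/train and cub/test following [TOPIC]; the processed cub.zip can be unpacked the same way as mini-imagenet.zip above):

import tarfile

# Unpack the raw CUB_200_2011 images under ../datasets/;
# building cub/train and cub/test per TOPIC is a separate step.
with tarfile.open("CUB_200_2011.tgz", "r:gz") as archive:
    archive.extractall("../datasets/")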

Create a '../datasets' directory for the three datasets above and place each dataset so that you have the following directory structure:

../                                                        # parent directory
├── ./                                           # current (project) directory
│   ├── log/                              # (dir.) running log
│   ├── pre/                              # (dir.) trained models for test.
│   ├── utils/                            # (dir.) implementation of paper 
│   ├── README.md                          # instructions for reproduction
│   ├── test.sh                          # bash for testing.
│   ├── train.py                        # code for training model
│   └── train.sh                        # bash for training model
└── datasets/
    ├── CIFAR100/                      # CIFAR100 devkit
    ├── mini-imagenet/           
    │   ├── train/                         # (dir.) training images (from Google Drive)
    │   ├── test/                           # (dir.) testing images (from Google Drive)
    │   └── ..some csv files..
    └── cub/                                   # (dir.) contains 200 object classes
        ├── train/                             # (dir.) training images (from Google Drive)
        └── test/                               # (dir.) testing images (from Google Drive)
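
A small sanity check, mirroring the tree above, can confirm that everything is in place before training (the CIFAR100 folder appears once torchvision has downloaded it):

import os

# Expected dataset folders, relative to the project directory.
expected = [
    "../datasets/CIFAR100",
    "../datasets/mini-imagenet/train",
    "../datasets/mini-imagenet/test",
    "../datasets/cub/train",
    "../datasets/cub/test",
]
for path in expected:
    print(("ok     " if os.path.isdir(path) else "MISSING"), path)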

Training

Choose the appropriate lines in the train.sh file.

sh train.sh
  • '--base_epochs' can be modified to control the initial accuracy ('Our' vs 'Our*' in our paper); see the sketch below.
  • Training takes several hours until convergence (on a single 2080 Ti or 3090 GPU).
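
For orientation only, a hypothetical sketch of how the '--base_epochs' flag might be exposed in train.py; only the flag name comes from this README, while the default value and the extra argument are illustrative assumptions:

import argparse

# Illustrative sketch only; not the actual argument parser of train.py.
parser = argparse.ArgumentParser(description="SPPR training (illustrative sketch)")
parser.add_argument("--base_epochs", type=int, default=100,
                    help="epochs for the base session; controls the initial accuracy "
                         "('Our' vs 'Our*' in the paper)")
parser.add_argument("--dataset", type=str, default="cifar100",
                    choices=["cifar100", "mini-imagenet", "cub"])  # hypothetical
args = parser.parse_args()
print(args)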

Testing

1. Download pretrained models to the 'pre' folder.

Pretrained models are available on our [Google Drive].

2. Test

Choose the appropriate lines in the test.sh file.

sh test.sh 
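
As a rough illustration, loading one of the downloaded checkpoints from the pre/ folder might look like the sketch below; the file name and checkpoint layout are assumptions, so check test.sh for the exact usage:

import torch

# Hypothetical checkpoint name; the real files come from the Google Drive link above.
checkpoint = torch.load("pre/cifar100_base.pth", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)  # handle either storage layout
print(len(state_dict), "tensors loaded")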

Main Results

The experimental results obtained with test.sh on the three datasets are shown below (accuracy in % per incremental session).

1. CIFAR-100

Model     1       2       3       4       5       6       7       8       9
iCaRL     64.10   53.28   41.69   34.13   27.93   25.06   20.41   15.48   13.73
TOPIC     64.10   56.03   47.89   42.99   38.02   34.60   31.67   28.35   25.86
Ours      63.97   65.86   61.31   57.60   53.39   50.93   48.27   45.36   43.32

2. MiniImageNet

Model     1       2       3       4       5       6       7       8       9
iCaRL     61.31   46.32   42.94   37.63   30.49   24.00   20.89   18.80   17.21
TOPIC     61.31   45.58   43.77   37.19   32.38   29.67   26.44   25.18   21.80
Ours      61.45   63.80   59.53   55.53   52.50   49.60   46.69   43.79   41.92

3. CUB200

Model     1       2       3       4       5       6       7       8       9       10      11
iCaRL     68.68   52.65   48.61   44.16   36.62   29.52   27.83   26.26   24.01   23.89   21.16
TOPIC     68.68   61.01   55.35   50.01   42.42   39.07   35.47   32.87   30.04   25.91   24.85
Ours      68.05   62.01   57.61   53.67   50.77   46.76   45.43   44.53   41.74   39.93   38.45

The results presented here differ slightly from those in the paper, which report the average over multiple runs.
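
If you want to summarize a row of session-wise accuracies from the tables above (for example, the mean over sessions or the drop from the first to the last session), a small sketch such as the following can be used:

# The 'Ours' row of the CIFAR-100 table above.
ours_cifar100 = [63.97, 65.86, 61.31, 57.60, 53.39, 50.93, 48.27, 45.36, 43.32]

mean_acc = sum(ours_cifar100) / len(ours_cifar100)
drop = ours_cifar100[0] - ours_cifar100[-1]
print(f"mean accuracy over sessions: {mean_acc:.2f}%")
print(f"drop from first to last session: {drop:.2f}%")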

BibTeX

If you use this code for your research, please consider citing:

@inproceedings{zhu2021self,
  title={Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning},
  author={Zhu, Kai and Cao, Yang and Zhai, Wei and Cheng, Jie and Zha, Zheng-Jun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6801--6810},
  year={2021}
}