Neighborhood Contrastive Learning for Novel Class Discovery


This repository contains the official implementation of our paper:

Neighborhood Contrastive Learning for Novel Class Discovery, CVPR 2021
Zhun Zhong, Enrico Fini, Subhankar Roy, Zhiming Luo, Elisa Ricci, Nicu Sebe

Requirements

PyTorch >= 1.1

Data preparation

We follow AutoNovel to prepare the data.

By default, we save the dataset in ./data/datasets/ and trained models in ./data/experiments/.

  • For CIFAR-10 and CIFAR-100, the datasets can be automatically downloaded by PyTorch.

  • For ImageNet, we use the same split files as existing work. To download them, run: sh scripts/download_imagenet_splits.sh. The ImageNet dataset folder is organized as follows:

    ImageNet/imagenet_rand118 #downloaded by the above command
    ImageNet/images/train #standard ImageNet training split
    ImageNet/images/val #standard ImageNet validation split
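
If you want to sanity-check the data layout before training, the short sketch below downloads CIFAR via torchvision and verifies the ImageNet folders. It assumes ImageNet is also placed under ./data/datasets/; adjust the paths if your copy lives elsewhere.

    import os
    from torchvision import datasets

    DATA_ROOT = "./data/datasets"

    # CIFAR-10/100 are downloaded automatically by torchvision if not already present.
    datasets.CIFAR10(root=os.path.join(DATA_ROOT, "CIFAR"), train=True, download=True)
    datasets.CIFAR100(root=os.path.join(DATA_ROOT, "CIFAR"), train=True, download=True)

    # Check that the ImageNet split files and the standard train/val folders are in place
    # (assumes ImageNet sits under ./data/datasets/ as well; change imagenet_root if not).
    imagenet_root = os.path.join(DATA_ROOT, "ImageNet")
    for sub in ("imagenet_rand118", "images/train", "images/val"):
        path = os.path.join(imagenet_root, sub)
        print(path, "found" if os.path.isdir(path) else "MISSING")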
    

Pretrained models

We use the pretrained models (self-supervised learning and supervised learning) provided by AutoNovel. To download, run:

sh scripts/download_pretrained_models.sh

If you would like to train the self-supervised and supervised learning models yourself, please refer to AutoNovel for more details.

After downloading, you can proceed to our neighborhood contrastive learning below.
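
The training scripts load these checkpoints for you, but if you want to inspect a downloaded file first, a minimal sketch like the following can help. It assumes the .pth file stores a plain state_dict, possibly nested under a "state_dict" key; adjust to whatever torch.load actually returns for your file.

    import torch

    # Inspect one of the downloaded pretrained checkpoints (key layout is an assumption).
    ckpt_path = "./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar10.pth"
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

    # Print the first few parameter names and shapes.
    for name, value in list(state_dict.items())[:5]:
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value)
        print(name, shape)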

Neighborhood Contrastive Learning for Novel Class Discovery

CIFAR10/CIFAR100

Without Hard Negative Generation (w/o HNG)
# Train on CIFAR10
CUDA_VISIBLE_DEVICES=0 sh scripts/ncl_cifar10.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar10.pth

# Train on CIFAR100
CUDA_VISIBLE_DEVICES=0 sh scripts/ncl_cifar100.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar100.pth

With Hard Negative Generation (w/ HNG)
# Train on CIFAR10
CUDA_VISIBLE_DEVICES=0 sh scripts/ncl_hng_cifar10.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar10.pth

# Train on CIFAR100
CUDA_VISIBLE_DEVICES=0 sh scripts/ncl_hng_cifar100.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar100.pth

Note that for CIFAR-10 we suggest training the model w/o HNG, because the results w/ HNG and w/o HNG are similar on CIFAR-10. In addition, the model w/ HNG can sometimes collapse; if that happens, try a different seed.
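
For reference, the core idea of the method is to enlarge the positive set of each unlabeled sample with its nearest neighbors in the embedding space. The snippet below is only an illustrative sketch of such a neighborhood contrastive loss, not the exact loss implemented by the training scripts; the function name, tensor shapes, and hyperparameters are all assumptions.

    import torch
    import torch.nn.functional as F

    def neighborhood_contrastive_loss(z, z_aug, k=5, temperature=0.1):
        """Illustrative NCL-style loss: the augmented view and the k nearest
        neighbors of each sample are treated as positives; every other sample
        in the batch acts as a negative. Not the repository's exact code."""
        z = F.normalize(z, dim=1)          # (N, D) embeddings of one view
        z_aug = F.normalize(z_aug, dim=1)  # (N, D) embeddings of the other view
        n = z.size(0)
        k = min(k, n - 1)

        sim = z @ z_aug.t() / temperature   # (N, N) cross-view similarities
        self_sim = z @ z.t()                # used only to pick neighbors
        self_sim.fill_diagonal_(-float("inf"))
        neighbors = self_sim.topk(k, dim=1).indices  # (N, k) pseudo-positive indices

        log_prob = F.log_softmax(sim, dim=1)
        pos_mask = torch.eye(n, device=z.device)     # the augmented view of each sample
        pos_mask.scatter_(1, neighbors, 1.0)         # plus its k nearest neighbors
        return -(pos_mask * log_prob).sum(1).div(pos_mask.sum(1)).mean()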

ImageNet

Without Hard Negative Generation (w/o HNG)
# Subset A
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --unlabeled_subset A --model_name resnet_imagenet_ncl

# Subset B
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --unlabeled_subset B --model_name resnet_imagenet_ncl

# Subset C
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --unlabeled_subset C --model_name resnet_imagenet_ncl

With Hard Negative Generation (w/ HNG)
# Subset A
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --hard_negative_start 3 --unlabeled_subset A --model_name resnet_imagenet_ncl_hng

# Subset B
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --hard_negative_start 3 --unlabeled_subset B --model_name resnet_imagenet_ncl_hng

# Subset C
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --hard_negative_start 3 --unlabeled_subset C --model_name resnet_imagenet_ncl_hng
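
The --hard_negative_start flag delays hard negative generation until the given epoch (here, epoch 3). Conceptually, HNG synthesizes harder negatives by mixing easy negatives with the anchor in feature space; the sketch below only illustrates that idea under assumed shapes and a fixed mixing ratio, and is not the code path used by ncl_imagenet.py.

    import torch
    import torch.nn.functional as F

    def generate_hard_negatives(anchors, easy_negatives, num=64, alpha=0.7):
        """Illustrative hard negative generation: interpolate each anchor with
        randomly chosen easy negatives in feature space. The sampling strategy
        and mixing ratio are assumptions, not the paper's exact recipe."""
        # anchors: (N, D), easy_negatives: (M, D)
        idx = torch.randint(0, easy_negatives.size(0), (anchors.size(0), num),
                            device=anchors.device)
        mixed = alpha * easy_negatives[idx] + (1 - alpha) * anchors.unsqueeze(1)
        return F.normalize(mixed, dim=-1)  # (N, num, D) synthesized negatives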

Acknowledgement

Our code is heavily based on AutoNovel. If you use this code, please also acknowledge their paper.

Citation

We hope you find our work useful. If you would like to acknowledge it in your project, please use the following citation:

@InProceedings{Zhong_2021_CVPR,
      author    = {Zhong, Zhun and Fini, Enrico and Roy, Subhankar and Luo, Zhiming and Ricci, Elisa and Sebe, Nicu},
      title     = {Neighborhood Contrastive Learning for Novel Class Discovery},
      booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      month     = {June},
      year      = {2021},
      pages     = {10867-10875}
}

Contact me

If you have any questions about this code, please do not hesitate to contact me.

Zhun Zhong
