Neighborhood Contrastive Learning for Novel Class Discovery


This repository contains the official implementation of our paper:

Neighborhood Contrastive Learning for Novel Class Discovery, CVPR 2021
Zhun Zhong, Enrico Fini, Subhankar Roy, Zhiming Luo, Elisa Ricci, Nicu Sebe

Requirements

PyTorch >= 1.1

Data preparation

We follow AutoNovel to prepare the data.

By default, we save the dataset in ./data/datasets/ and trained models in ./data/experiments/.

  • For CIFAR-10 and CIFAR-100, the datasets can be downloaded automatically by PyTorch (a minimal download sketch is shown after this list).

  • For ImageNet, we use the same split files as existing work. To download them, run: sh scripts/download_imagenet_splits.sh. The ImageNet dataset folder is organized as follows:

    ImageNet/imagenet_rand118 #downloaded by the above command
    ImageNet/images/train #standard ImageNet training split
    ImageNet/images/val #standard ImageNet validation split
    
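For the CIFAR datasets, the snippet below is a minimal sketch of what the automatic download looks like with torchvision; the training scripts in this repo may wrap the datasets differently, so it only illustrates fetching the data into the default ./data/datasets/CIFAR/ root.

# Minimal sketch: torchvision downloads CIFAR-10/100 into the default data root.
# This only illustrates the automatic download; the repo's own dataset wrappers
# may add splits and transforms on top of this.
from torchvision import datasets

data_root = "./data/datasets/CIFAR/"
cifar10 = datasets.CIFAR10(root=data_root, train=True, download=True)
cifar100 = datasets.CIFAR100(root=data_root, train=True, download=True)
print(len(cifar10), len(cifar100))  # 50000 50000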

Pretrained models

We use the pretrained models (self-supervised learning and supervised learning) provided by AutoNovel. To download, run:

sh scripts/download_pretrained_models.sh

If you would like to train the self-supervised and supervised learning models yourself, please refer to AutoNovel for more details.

After downloading, you can proceed to train with our neighborhood contrastive learning as described below.
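If you want to inspect a downloaded checkpoint before training, the following is a hedged sketch. It assumes the file stores a plain PyTorch state_dict (possibly nested under a "state_dict" key); the actual key names depend on the AutoNovel model definition and are only illustrative here.

# Hedged sketch: peek at a downloaded checkpoint.
# Assumes a plain state_dict; adjust if the checkpoint format differs.
import torch

ckpt_path = "./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar10.pth"
ckpt = torch.load(ckpt_path, map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in list(state_dict.items())[:5]:
    print(name, getattr(tensor, "shape", type(tensor)))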

Neighborhood Contrastive Learning for Novel Class Discovery

CIFAR10/CIFAR100

Without Hard Negative Generation (w/o HNG)
# Train on CIFAR10
CUDA_VISIBLE_DEVICES=0 sh scripts/ncl_cifar10.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar10.pth

# Train on CIFAR100
CUDA_VISIBLE_DEVICES=0 sh scripts/ncl_cifar100.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar100.pth

With Hard Negative Generation (w/ HNG)
# Train on CIFAR10
CUDA_VISIBLE_DEVICES=0 sh scripts/ncl_hng_cifar10.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar10.pth

# Train on CIFAR100
CUDA_VISIBLE_DEVICES=0 sh scripts/ncl_hng_cifar100.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar100.pth

Note that for CIFAR-10 we suggest training the model w/o HNG, because the results with and without HNG are similar on CIFAR-10. In addition, the model w/ HNG sometimes collapses; if that happens, try a different seed to obtain a normal result.
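The core idea behind the scripts above is the neighborhood contrastive loss: besides the augmented view of each unlabeled sample, its nearest neighbors in a feature queue are treated as additional pseudo-positives. Below is a simplified, self-contained sketch of such a loss; the function name, hyper-parameters, and top-k neighbor selection are illustrative and not the exact implementation used by the scripts.

# Simplified sketch of a neighborhood contrastive loss.
# For each anchor, the augmented view plus its top-k nearest neighbors in a
# memory queue are treated as positives; all other queue entries are negatives.
import torch
import torch.nn.functional as F

def neighborhood_contrastive_loss(z, z_aug, queue, k=5, temperature=0.1):
    """z, z_aug: (B, D) L2-normalized features of two views; queue: (Q, D)."""
    sim_queue = z @ queue.t() / temperature                         # (B, Q)
    sim_aug = (z * z_aug).sum(dim=1, keepdim=True) / temperature    # (B, 1)

    # Top-k most similar queue entries act as pseudo-positives.
    topk_idx = sim_queue.topk(k, dim=1).indices                     # (B, k)

    logits = torch.cat([sim_aug, sim_queue], dim=1)                 # (B, 1 + Q)
    log_prob = F.log_softmax(logits, dim=1)

    # Positive set: the augmented view (column 0) and the k neighbors.
    pos_mask = torch.zeros_like(logits)
    pos_mask[:, 0] = 1.0
    pos_mask.scatter_(1, topk_idx + 1, 1.0)

    loss = -(log_prob * pos_mask).sum(dim=1) / pos_mask.sum(dim=1)
    return loss.mean()

# Toy usage with random, normalized features.
z = F.normalize(torch.randn(8, 128), dim=1)
z_aug = F.normalize(torch.randn(8, 128), dim=1)
queue = F.normalize(torch.randn(256, 128), dim=1)
print(neighborhood_contrastive_loss(z, z_aug, queue).item())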

ImageNet

Without Hard Negative Generation (w/o HNG)
# Subset A
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --unlabeled_subset A --model_name resnet_imagenet_ncl

# Subset B
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --unlabeled_subset B --model_name resnet_imagenet_ncl

# Subset C
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --unlabeled_subset C --model_name resnet_imagenet_ncl

With Hard Negative Generation (w/ HNG)
# Subset A
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --hard_negative_start 3 --unlabeled_subset A --model_name resnet_imagenet_ncl_hng

# Subset B
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --hard_negative_start 3 --unlabeled_subset B --model_name resnet_imagenet_ncl_hng

# Subset C
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --hard_negative_start 3 --unlabeled_subset C --model_name resnet_imagenet_ncl_hng
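As its name suggests, the --hard_negative_start argument controls the epoch from which hard negative generation is enabled, so that features are reasonably stable before negatives are synthesized. The sketch below illustrates the feature-mixing idea behind HNG, where labeled-class features (certain negatives for novel classes) are interpolated with unlabeled features; the mixing coefficients and sampling strategy are illustrative assumptions, not the exact recipe used in the paper.

# Hedged sketch of hard negative generation (HNG) by feature mixing.
import torch
import torch.nn.functional as F

def generate_hard_negatives(unlabeled_feats, labeled_feats, num_mix=2, alpha=0.6):
    """unlabeled_feats: (B, D); labeled_feats: (M, D); both L2-normalized."""
    B = unlabeled_feats.size(0)
    hard_negs = []
    for _ in range(num_mix):
        idx = torch.randint(0, labeled_feats.size(0), (B,))
        lam = torch.distributions.Beta(alpha, alpha).sample((B, 1))
        mixed = lam * unlabeled_feats + (1 - lam) * labeled_feats[idx]
        hard_negs.append(F.normalize(mixed, dim=1))
    return torch.cat(hard_negs, dim=0)  # (num_mix * B, D)

# Toy usage with random, normalized features.
u = F.normalize(torch.randn(16, 128), dim=1)
l = F.normalize(torch.randn(64, 128), dim=1)
print(generate_hard_negatives(u, l).shape)  # torch.Size([32, 128])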

Acknowledgement

Our code is largely built on AutoNovel. If you use this code, please also acknowledge their paper.

Citation

We hope you find our work useful. If you would like to acknowledge it in your project, please use the following citation:

@InProceedings{Zhong_2021_CVPR,
      author    = {Zhong, Zhun and Fini, Enrico and Roy, Subhankar and Luo, Zhiming and Ricci, Elisa and Sebe, Nicu},
      title     = {Neighborhood Contrastive Learning for Novel Class Discovery},
      booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      month     = {June},
      year      = {2021},
      pages     = {10867-10875}
}

Contact me

If you have any questions about this code, please do not hesitate to contact me.

Zhun Zhong
