Neighborhood Contrastive Learning for Novel Class Discovery


This repository contains the official implementation of our paper:

Neighborhood Contrastive Learning for Novel Class Discovery, CVPR 2021
Zhun Zhong, Enrico Fini, Subhankar Roy, Zhiming Luo, Elisa Ricci, Nicu Sebe

Requirements

PyTorch >= 1.1
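The repository does not pin a full environment. Below is a minimal setup sketch; the conda environment name and the Python/torchvision versions are assumptions, not requirements stated here, so adjust them to your CUDA toolkit.

# Create and activate a fresh environment, then install PyTorch and torchvision
conda create -n ncl python=3.7 -y
conda activate ncl
pip install "torch>=1.1" torchvision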

Data preparation

We follow AutoNovel to prepare the data.

By default, we save the dataset in ./data/datasets/ and trained models in ./data/experiments/.

  • For CIFAR-10 and CIFAR-100, the datasets can be automatically downloaded by PyTorch.

  • For ImageNet, we use the same split files as existing work. To download them, run: sh scripts/download_imagenet_splits.sh. The ImageNet dataset folder should be organized as follows (an example setup is sketched after this list):

    ImageNet/imagenet_rand118 #downloaded by the above command
    ImageNet/images/train #standard ImageNet training split
    ImageNet/images/val #standard ImageNet validation split
    
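For example, the default directories and the ImageNet layout above can be prepared as follows. This is a sketch: /path/to/imagenet is a placeholder for your existing ImageNet installation, and the assumption that the dataset lives under ./data/datasets/ should be checked against the dataset path expected by ncl_imagenet.py.

# Create the default dataset and experiment directories
mkdir -p ./data/datasets ./data/experiments

# Link an existing ImageNet installation into the expected layout
mkdir -p ./data/datasets/ImageNet/images
ln -s /path/to/imagenet/train ./data/datasets/ImageNet/images/train
ln -s /path/to/imagenet/val ./data/datasets/ImageNet/images/val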

Pretrained models

We use the pretrained models (self-supervised learning and supervised learning) provided by AutoNovel. To download, run:

sh scripts/download_pretrained_models.sh

If you would like to train the self-supervised and supervised models yourself, please refer to AutoNovel for more details.

After downloading, you can proceed to the neighborhood contrastive learning steps below.
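Once the download finishes, you can check that the supervised checkpoints referenced by the CIFAR commands below are in place, for example:

# Both files should exist under ./data/experiments/pretrained/supervised_learning/
ls ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar10.pth \
   ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar100.pth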

Neighborhood Contrastive Learning for Novel Class Discovery

CIFAR10/CIFAR100

Without Hard Negative Generation (w/o HNG)
# Train on CIFAR10
CUDA_VISIBLE_DEVICES=0 sh scripts/ncl_cifar10.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar10.pth

# Train on CIFAR100
CUDA_VISIBLE_DEVICES=0 sh scripts/ncl_cifar100.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar100.pth

With Hard Negative Generation (w/ HNG)
# Train on CIFAR10
CUDA_VISIBLE_DEVICES=0 sh scripts/ncl_hng_cifar10.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar10.pth

# Train on CIFAR100
CUDA_VISIBLE_DEVICES=0 sh scripts/ncl_hng_cifar100.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/pretrained/supervised_learning/resnet_rotnet_cifar100.pth

Note that for CIFAR-10 we suggest training the model w/o HNG, because the results with and without HNG are similar on CIFAR-10. In addition, the model trained w/ HNG sometimes collapses; if that happens, rerun with a different random seed to obtain a normal result.

ImageNet

Without Hard Negative Generation (w/o HNG)
# Subset A
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --unlabeled_subset A --model_name resnet_imagenet_ncl

# Subset B
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --unlabeled_subset B --model_name resnet_imagenet_ncl

# Subset C
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --unlabeled_subset C --model_name resnet_imagenet_ncl

With Hard Negative Generation (w/ HNG)
# Subset A
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --hard_negative_start 3 --unlabeled_subset A --model_name resnet_imagenet_ncl_hng

# Subset B
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --hard_negative_start 3 --unlabeled_subset B --model_name resnet_imagenet_ncl_hng

# Subset C
CUDA_VISIBLE_DEVICES=0 python ncl_imagenet.py --hard_negative_start 3 --unlabeled_subset C --model_name resnet_imagenet_ncl_hng

Acknowledgement

Our code is heavily based on AutoNovel. If you use this code, please also acknowledge their paper.

Citation

We hope you find our work useful. If you would like to acknowledge it in your project, please use the following citation:

@InProceedings{Zhong_2021_CVPR,
      author    = {Zhong, Zhun and Fini, Enrico and Roy, Subhankar and Luo, Zhiming and Ricci, Elisa and Sebe, Nicu},
      title     = {Neighborhood Contrastive Learning for Novel Class Discovery},
      booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      month     = {June},
      year      = {2021},
      pages     = {10867-10875}
}

Contact me

If you have any questions about this code, please do not hesitate to contact me.

Zhun Zhong
