Revitalizing CNN Attention via Transformers in Self-Supervised Visual Representation Learning

Overview

This repository is the official implementation of CARE (Revitalizing CNN Attention via Transformers in Self-Supervised Visual Representation Learning).

Updates

  • (09/10/2021) Our paper was accepted by NeurIPS 2021.

Requirements

To install requirements:

conda create -n care python=3.6
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch
pip install tensorboard
pip install ipdb
pip install einops
pip install loguru
pip install pyarrow==3.0.0
pip install tqdm

📋 PyTorch >= 1.6 is needed for running the code.
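
To verify the installation, a quick sanity check such as the following (plain PyTorch, nothing CARE-specific) can confirm the versions and GPU visibility:

import torch
import torchvision

print(torch.__version__)          # should be >= 1.6 (1.7.1 with the conda command above)
print(torchvision.__version__)    # 0.8.2 with the conda command above
print(torch.cuda.is_available())  # should be True on a GPU machine
print(torch.cuda.device_count())  # 8 for the multi-gpu training script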

Data Preparation

Prepare the ImageNet data in {data_path}/train.lmdb and {data_path}/val.lmdb

Replace the original data path in care/data/dataset_lmdb (line 7 and line 40) with your new {data_path}.

📋 Note that we use the LMDB format to speed up the data-loading procedure.
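
If your ImageNet copy is still in the standard folder layout, a conversion along the following lines can build an LMDB file. This is a minimal sketch: the key/value layout and the pyarrow serialization are assumptions for illustration and must be matched to the loader in care/data/dataset_lmdb, and the lmdb Python package (pip install lmdb) is assumed to be installed.

import lmdb
import pyarrow as pa
from torchvision.datasets import ImageFolder

def folder_to_lmdb(image_dir, lmdb_path):
    # Hypothetical layout: integer keys -> (raw JPEG bytes, class id) pairs.
    dataset = ImageFolder(image_dir)              # collects (path, class id) pairs
    env = lmdb.open(lmdb_path, map_size=1 << 40)  # reserve up to 1 TB of address space
    with env.begin(write=True) as txn:
        for idx, (path, label) in enumerate(dataset.samples):
            with open(path, "rb") as f:
                raw = f.read()                    # store encoded bytes; decode at load time
            # pa.serialize is deprecated but still present in the pinned pyarrow==3.0.0
            txn.put(str(idx).encode(), pa.serialize((raw, label)).to_buffer().to_pybytes())
        txn.put(b"__len__", pa.serialize(len(dataset.samples)).to_buffer().to_pybytes())
    env.close()

folder_to_lmdb("{data_path}/train", "{data_path}/train.lmdb")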

Training

Before training the ResNet-50 (100-epoch) model from the paper, run these commands first to add the code to your PYTHONPATH:

export PYTHONPATH=$PYTHONPATH:{your_code_path}/care/
export PYTHONPATH=$PYTHONPATH:{your_code_path}/care/care/

Then run the training code via:

bash run_train.sh      #(The training script for training CARE with 8 GPUs)
bash single_gpu_train.sh    #(We also provide a script for training CARE with a single GPU)

📋 The training script performs unsupervised pre-training of a ResNet-50 model on ImageNet on an 8-GPU machine.

  1. Use -b to specify the batch size, e.g., -b 128.
  2. Use -d to specify the GPU ids for training, e.g., -d 0-7.
  3. Use --log_path to specify the main folder for saving experimental results.
  4. Use --experiment-name to specify the folder for saving training outputs.

The code base also supports training other backbones (e.g., ResNet101 and ResNet152) with different training schedules (e.g., 200, 400, and 800 epochs).
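
Self-supervised pre-training methods in this family commonly keep a momentum (EMA) copy of the online network as the target; the sketch below shows that generic update, not CARE's exact rule (see the paper and the training code for the specifics):

import torch

@torch.no_grad()
def ema_update(online, target, momentum=0.99):
    # target <- momentum * target + (1 - momentum) * online, parameter by parameter
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(momentum).add_(p_o, alpha=1.0 - momentum)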

Evaluation

Before starting the evaluation, run these commands first to add the code to your PYTHONPATH:

export PYTHONPATH=$PYTHONPATH:{your_code_path}/care/
export PYTHONPATH=$PYTHONPATH:{your_code_path}/care/care/

Then, to evaluate the pre-trained model (e.g., ResNet50-100epoch) on ImageNet, run:

bash run_val.sh      #(The evaluation script for evaluating CARE with 8 GPUs)
bash debug_val.sh    #(We also provide a script for evaluating CARE with a single GPU)

📋 The evaluation script performs supervised linear evaluation of a ResNet-50 model on ImageNet on an 8-GPU machine.

  1. Use -b to specify the batch size, e.g., -b 128.
  2. Use -d to specify the GPU ids for evaluation, e.g., -d 0-7.
  3. Modify --log_path according to your own configuration.
  4. Modify --experiment-name according to your own configuration.
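
For reference, the Top-1/Top-5 numbers reported in the Model Zoo below are standard top-k accuracies; a generic way to compute them (not the repository's own metric code) is:

import torch

def topk_accuracy(logits, targets, ks=(1, 5)):
    # logits: (N, num_classes) scores; targets: (N,) ground-truth class ids
    maxk = max(ks)
    _, pred = logits.topk(maxk, dim=1)        # indices of the top-k predicted classes
    correct = pred.eq(targets.unsqueeze(1))   # (N, maxk) boolean hit matrix
    return [correct[:, :k].any(dim=1).float().mean().item() for k in ks]

# usage: top1, top5 = topk_accuracy(model(images), labels)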

Pre-trained Models

We provide some pre-trained models in the [shared folder]:

Here are some examples.

  • [ResNet-50 100epoch] trained on ImageNet using ResNet-50 with 100 epochs.
  • [ResNet-50 200epoch] trained on ImageNet using ResNet-50 with 200 epochs.
  • [ResNet-50 400epoch] trained on ImageNet using ResNet-50 with 400 epochs.

More models are listed in the Model Zoo section below.

📋 We will provide more pretrained models in the future.
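
Once downloaded, a pre-trained backbone can usually be loaded into a standard torchvision ResNet-50 along these lines; the filename and the checkpoint key layout below are assumptions, so inspect the state dict before relying on strict loading:

import torch
from torchvision.models import resnet50

model = resnet50()
state = torch.load("care_resnet50_100ep.pth", map_location="cpu")  # hypothetical filename
state = state.get("state_dict", state)    # some checkpoints nest weights under "state_dict"
missing, unexpected = model.load_state_dict(state, strict=False)
print(missing, unexpected)                # check which keys did (not) match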

Model Zoo

Our models achieve the following performance on ImageNet classification and on transfer to detection and segmentation:

Self-supervised learning on image classification.

| Method | Backbone | Epochs | Top-1 | Top-5 | Pretrained model | Linear evaluation model |
| --- | --- | --- | --- | --- | --- | --- |
| CARE | ResNet50 | 100 | 72.02% | 90.02% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50 | 200 | 73.78% | 91.50% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50 | 400 | 74.68% | 91.97% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50 | 800 | 75.56% | 92.32% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50(2x) | 100 | 73.51% | 91.66% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50(2x) | 200 | 75.00% | 92.22% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50(2x) | 400 | 76.48% | 92.99% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet50(2x) | 800 | 77.04% | 93.22% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet101 | 100 | 73.54% | 91.63% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet101 | 200 | 75.89% | 92.70% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet101 | 400 | 76.85% | 93.31% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet101 | 800 | 77.23% | 93.52% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet152 | 100 | 74.59% | 92.09% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet152 | 200 | 76.58% | 93.63% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet152 | 400 | 77.40% | 93.63% | [pretrained] (wip) | [linear_model] (wip) |
| CARE | ResNet152 | 800 | 78.11% | 93.81% | [pretrained] (wip) | [linear_model] (wip) |

Transfer learning to object detection and semantic segmentation.

COCO det

| Method | Backbone | Epochs | AP_bb | AP_50 | AP_75 | Pretrained model | Det/seg model |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CARE | ResNet50 | 200 | 39.4 | 59.2 | 42.6 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50 | 400 | 39.6 | 59.4 | 42.9 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50-FPN | 200 | 39.5 | 60.2 | 43.1 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50-FPN | 400 | 39.8 | 60.5 | 43.5 | [pretrained] (wip) | [model] (wip) |

COCO instance seg

| Method | Backbone | Epochs | AP_mk | AP_50 | AP_75 | Pretrained model | Det/seg model |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CARE | ResNet50 | 200 | 34.6 | 56.1 | 36.8 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50 | 400 | 34.7 | 56.1 | 36.9 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50-FPN | 200 | 35.9 | 57.2 | 38.5 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50-FPN | 400 | 36.2 | 57.4 | 38.8 | [pretrained] (wip) | [model] (wip) |

VOC07+12 det

| Method | Backbone | Epochs | AP_bb | AP_50 | AP_75 | Pretrained model | Det/seg model |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CARE | ResNet50 | 200 | 57.7 | 83.0 | 64.5 | [pretrained] (wip) | [model] (wip) |
| CARE | ResNet50 | 400 | 57.9 | 83.0 | 64.7 | [pretrained] (wip) | [model] (wip) |

📋 More results are provided in the paper.

Contributing

📋 WIP
