Contrastive Learning of Image Representations with Cross-Video Cycle-Consistency

Overview


This is an official implementation of CycleContrast, introduced in the paper Contrastive Learning of Image Representations with Cross-Video Cycle-Consistency.

Citation

If you find our work useful, please cite:

@article{wu2021contrastive,
  title={Contrastive Learning of Image Representations with Cross-Video Cycle-Consistency},
  author={Wu, Haiping and Wang, Xiaolong},
  journal={arXiv preprint arXiv:2105.06463},
  year={2021}
}

Preparation

Our code is tested with Python 3.7 and PyTorch 1.3.0. Please install the environment via:

pip install -r requirements.txt

Model Zoo

We provide the model pretrained on R2V2 for 200 epochs.

| method | pre-train epochs on R2V2 | ImageNet Top-1 Linear Eval | OTB Precision | OTB Success | UCF Top-1 | pretrained model |
| --- | --- | --- | --- | --- | --- | --- |
| MoCo | 200 | 53.8 | 56.1 | 40.6 | 80.5 | pretrain ckpt |
| CycleContrast | 200 | 55.7 | 69.6 | 50.4 | 82.8 | pretrain ckpt |
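
To reuse a pretrained checkpoint outside the provided scripts, the backbone weights can typically be extracted the same way as for MoCo. The snippet below is a minimal sketch that assumes the checkpoint keeps MoCo's key layout ("module.encoder_q.*"); adjust the checkpoint path to where you saved the download.

import torch
import torchvision.models as models

# Assumption: the checkpoint follows the MoCo key layout ("module.encoder_q.*").
checkpoint = torch.load("output/cycle_res50_r2v2_ep200/checkpoint_0199.pth.tar", map_location="cpu")
state_dict = checkpoint["state_dict"]

# Keep only the query-encoder weights, drop its projection head, and strip the prefix.
backbone_state = {
    k[len("module.encoder_q."):]: v
    for k, v in state_dict.items()
    if k.startswith("module.encoder_q.") and not k.startswith("module.encoder_q.fc")
}

model = models.resnet50()
msg = model.load_state_dict(backbone_state, strict=False)
print(msg.missing_keys)  # should only list the weights of the (untouched) fc classifier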

Run Experiments

Data preparation

Download R2V2 (Random Related Video Views) dataset according to https://github.com/danielgordon10/vince.

The directory structure should be as follows:

CycleContrast
├── cycle_contrast 
├── scripts 
├── utils 
├── data
│   ├── r2v2_large_with_ids 
│   │   ├── train 
│   │   │   ├── --/
│   │   │   ├── -_/
│   │   │   ├── _-/
│   │   │   ├── __/
│   │   │   ├── -0/
│   │   │   ├── _0/
│   │   │   ├── ...
│   │   │   ├── zZ/
│   │   │   ├── zz/
│   │   ├── val
│   │   │   ├── --/
│   │   │   ├── -_/
│   │   │   ├── _-/
│   │   │   ├── __/
│   │   │   ├── -0/
│   │   │   ├── _0/
│   │   │   ├── ...
│   │   │   ├── zZ/
│   │   │   ├── zz/
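
As a quick sanity check that the dataset was extracted into the layout above, something like the following can be used. This is a hypothetical convenience snippet, not part of the training code; it only counts the two-character prefix folders and the files inside them.

from pathlib import Path

root = Path("data/r2v2_large_with_ids")
for split in ("train", "val"):
    prefix_dirs = sorted(d for d in (root / split).iterdir() if d.is_dir())
    n_files = sum(1 for d in prefix_dirs for f in d.rglob("*") if f.is_file())
    print(f"{split}: {len(prefix_dirs)} prefix folders, {n_files} files")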

Unsupervised Pretrain

./scripts/train_cycle.sh

Downstream task - ImageNet linear eval

Prepare the ImageNet dataset according to the PyTorch ImageNet training code.

MODEL_DIR=output/cycle_res50_r2v2_ep200
IMAGENET_DATA=data/ILSVRC/Data/CLS-LOC
./scripts/eval_ImageNet.sh $MODEL_DIR $IMAGENET_DATA
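
Linear evaluation follows the standard MoCo protocol: the pretrained backbone is frozen and only a linear classifier on top of the pooled features is trained. A minimal sketch of that setup (assuming the backbone was loaded as in the Model Zoo snippet above) looks like this:

import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet50()
# ... load the pretrained backbone weights here, e.g. as in the Model Zoo snippet ...

model.fc = nn.Linear(model.fc.in_features, 1000)  # fresh 1000-way ImageNet classifier
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc.")   # freeze everything except the classifier

# lr=30 with no weight decay is the usual MoCo linear-eval recipe; adjust as needed.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=30.0, momentum=0.9, weight_decay=0.0)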

Downstream task - OTB tracking

Transfer to OTB tracking is evaluated with SiamFC-PyTorch. Please prepare the environment and data according to SiamFC-PyTorch.

git clone https://github.com/happywu/mmaction2-CycleContrast
# path to your pretrained model, change accordingly
CycleContrast=/home/user/code/CycleContrast
PRETRAIN=${CycleContrast}/output/cycle_res50_r2v2_ep200/checkpoint_0199.pth.tar
cd mmaction2_tracking
./scripts/submit_r2v2_r50_cycle.py ${PRETRAIN}

Downstream task - UCF classification

Transfer to UCF action recognition is evaluated with AVID-CMA. Please prepare the data and environment according to AVID-CMA.

git clone https://github.com/happywu/AVID-CMA-CycleContrast
# path to your pretrained model, change accordingly
CycleContrast=/home/user/code/CycleContrast
PRETRAIN=${CycleContrast}/output/cycle_res50_r2v2_ep200/checkpoint_0199.pth.tar
cd AVID-CMA-CycleContrast 
./scripts/submit_r2v2_r50_cycle.py ${PRETRAIN}

Acknowledgements

The codebase is based on the FAIR MoCo implementation. The OTB tracking evaluation is based on MMAction2, SiamFC-PyTorch, and vince. The UCF classification evaluation follows AVID-CMA.

Thank you all for the great open source repositories!
