Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks

Overview

This repository contains the official code for the paper "Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks".

Requirements

This codebase has been tested with the following package versions:

python=3.8.8
torch=1.9.0+cu102
torchvision=0.10.0+cu102
PIL=8.1.0
numpy=1.19.2
scipy=1.6.1
tqdm=4.57.0
sklearn=0.24.1
albumentations=1.0.3
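
The names above are import names; note that on PyPI, PIL and sklearn are published as pillow and scikit-learn. One possible way to install matching versions (the CUDA 10.2 builds of torch and torchvision may require following the official PyTorch install instructions instead):

pip install torch==1.9.0 torchvision==0.10.0 pillow==8.1.0 numpy==1.19.2 scipy==1.6.1 tqdm==4.57.0 scikit-learn==0.24.1 albumentations==1.0.3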

Prepare data

There are several dataset classes defined in the datasets directory. The data is expected in a directory named data, located at the same level as this repository. Below is an outline of the expected file structure:

data/
    imagenet/
    CIFAR10/
    300W/
    ...
ssl-invariances/
    datasets/
    models/
    readme.md
    ...

For synthetic invariance evaluation, download the ILSVRC2012 validation data from https://image-net.org/ and store it in ../data/imagenet/val/.
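
For example, assuming the standard archive name ILSVRC2012_img_val.tar, the following (run from the root of this repository) places the data where it is expected; depending on the dataset class used, the images may additionally need to be sorted into per-class subdirectories:

mkdir -p ../data/imagenet/val
tar -xf ILSVRC2012_img_val.tar -C ../data/imagenet/val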

For real-world invariance evaluation, download the following datasets: Flickr1024, COIL-100, ALOI, ALOT, DaLI, ExposureErrors and RealBlur.

For extrinsic invariance evaluation, download the Causal3DIdent dataset.

Finally, our downstream evaluations use the CIFAR10, Caltech101, Flowers, 300W, CelebA and LSPose datasets.

Pre-training models

We pre-train several models based on the MoCo codebase.

To set up a version of the codebase that can pre-train our models, first clone the MoCo repo at the same level as this repo:

git clone https://github.com/facebookresearch/moco

This should be the resulting file structure:

data/
ssl-invariances/
moco/

Then copy the files from ssl-invariances/pretraining/ into the cloned repo:

cp ssl-invariances/pretraining/* moco/

Finally, to train our models, enter the cloned repo (cd moco) and run one of the following:

# train the Default model
python main_moco.py -a resnet50 --model default --lr 0.03 --batch-size 256 --mlp --moco-t 0.2 --cos --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 ../data/imagenet

# train the Ventral model
python main_moco.py -a resnet50 --model ventral --lr 0.03 --batch-size 256 --mlp --moco-t 0.2 --cos --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 ../data/imagenet

# train the Dorsal model
python main_moco.py -a resnet50 --model dorsal --lr 0.03 --batch-size 256 --mlp --moco-t 0.2 --cos --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 ../data/imagenet

# train the Default(x3) model
python main_moco.py -a resnet50w3 --model default --moco-dim 384 --lr 0.03 --batch-size 256 --mlp --moco-t 0.2 --cos --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 ../data/imagenet

This will train the models for 200 epochs and save checkpoints along the way. When training has completed, the final model checkpoint, e.g. default_00199.pth.tar, should be moved to ssl-invariances/models/default.pth.tar for use by the evaluation code below.
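
For example, from the directory containing both repos (the exact checkpoint filename depends on your run):

mkdir -p ssl-invariances/models
mv moco/default_00199.pth.tar ssl-invariances/models/default.pth.tar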

The rest of this codebase assumes these final model checkpoints are located in a directory called ssl-invariances/models/ as shown below.

ssl-invariances/
    models/
        default.pth.tar
        default_w3.pth.tar
        dorsal.pth.tar
        ventral.pth.tar

Synthetic invariance

To evaluate the Default model on grayscale invariance, run:

python eval_synthetic_invariance.py --model default --transform grayscale ../data/imagenet

This will compute the mean and covariance of the model's feature space and save these statistics in the results/ directory. These are then used to speed up future invariance computations for the same model.
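
As a minimal sketch of that caching pattern (not the repo's exact code; the file name and helper are illustrative), assuming the features have already been extracted as an (N, D) array:

# Sketch: compute feature mean/covariance once per model, then reuse.
import os
import numpy as np

def load_or_compute_stats(model_name, features, results_dir='results'):
    """features: (N, D) array of embeddings from the frozen encoder."""
    os.makedirs(results_dir, exist_ok=True)
    path = os.path.join(results_dir, f'{model_name}_feature_stats.npz')
    if os.path.exists(path):  # reuse cached statistics on later runs
        cached = np.load(path)
        return cached['mean'], cached['cov']
    mean = features.mean(axis=0)          # (D,)
    cov = np.cov(features, rowvar=False)  # (D, D)
    np.savez(path, mean=mean, cov=cov)
    return mean, cov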

Real-world invariance

To evaluate the Ventral model on COIL100 viewpoint invariance, run:

python eval_realworld_invariance.py --model ventral --dataset COIL100

Extrinsic invariance on Causal3DIdent

To evaluate the Dorsal model on Causal3DIdent object x position prediction, run:

python eval_causal3dident.py --model dorsal --target 0

Downstream performance

To evaluate the combined Def+Ven+Dor model on 300W facial landmark regression, run:

python eval_downstream.py --model default+ventral+dorsal --dataset 300w
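
The combined model presumably concatenates the features of the three frozen encoders before the downstream readout; the sketch below illustrates that idea under this assumption (combined_features is illustrative, not this repo's API):

# Sketch: concatenate features from the three frozen encoders.
import torch

def combined_features(x, encoders):
    """x: batch of images; encoders: frozen backbones loaded from the
    default, ventral and dorsal checkpoints."""
    with torch.no_grad():
        feats = [encoder(x) for encoder in encoders]
    return torch.cat(feats, dim=1)  # e.g. three 2048-d features -> 6144-d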

Citation

If you find our work useful for your research, please consider citing our paper:

@misc{ericsson2021selfsupervised,
      title={Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks}, 
      author={Linus Ericsson and Henry Gouk and Timothy M. Hospedales},
      year={2021},
      eprint={2111.11398},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

If you have any questions, feel free to create an issue or contact Linus Ericsson ([email protected]).
