PAWS 🐾 Predicting View-Assignments with Support Samples

Overview

This repo provides a PyTorch implementation of PAWS (predicting view assignments with support samples), as described in the paper Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples.

[Figure: PAWS pre-training flowchart]

PAWS is a method for semi-supervised learning that builds on the principles of self-supervised distance-metric learning. PAWS pre-trains a model to minimize a consistency loss, which ensures that different views of the same unlabeled image are assigned similar pseudo-labels. The pseudo-labels are generated non-parametrically, by comparing the representations of the image views to those of a set of randomly sampled labeled images. The distance between the view representations and labeled representations is used to provide a weighting over class labels, which we interpret as a soft pseudo-label. By non-parametrically incorporating labeled samples in this way, PAWS extends the distance-metric loss used in self-supervised methods such as BYOL and SwAV to the semi-supervised setting.
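
The assignment step is simple enough to sketch in a few lines of PyTorch. The snippet below is only an illustration of the similarity-weighted voting described above (the function names and temperature value are placeholders, and details from the paper such as target sharpening are omitted); it is not the code used in this repo.

import torch
import torch.nn.functional as F

def soft_pseudo_labels(view_embs, support_embs, support_labels, tau=0.1):
    # Similarity-weighted vote over the labeled support samples.
    # support_labels: one-hot label matrix of shape [num_support, num_classes].
    view_embs = F.normalize(view_embs, dim=1)
    support_embs = F.normalize(support_embs, dim=1)
    sims = view_embs @ support_embs.T / tau          # [batch, num_support]
    return F.softmax(sims, dim=1) @ support_labels   # [batch, num_classes]

def consistency_loss(probs_view1, probs_view2, eps=1e-8):
    # Each view is trained to match the pseudo-label assigned to the other view;
    # the targets are treated as fixed in this sketch.
    ce = lambda p, q: -(q * torch.log(p + eps)).sum(dim=1).mean()
    return 0.5 * (ce(probs_view1, probs_view2.detach())
                  + ce(probs_view2, probs_view1.detach()))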

Also provided in this repo is a PyTorch implementation of the semi-supervised SimCLR+CT method described in the paper Supervision Accelerates Pretraining in Contrastive Semi-Supervised Learning of Visual Representations. SimCLR+CT combines the SimCLR self-supervised loss with the SuNCEt (supervised noise contrastive estimation) loss for semi-supervised learning.
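
For reference, a SuNCEt-style supervised contrastive term can be sketched as follows. This is an illustrative implementation assuming L2-normalized embeddings and integer class labels; the function name and temperature are placeholders, not this repo's API.

import torch
import torch.nn.functional as F

def suncet_loss(embs, labels, tau=0.1):
    # Supervised noise-contrastive estimation: pull each labeled embedding toward
    # same-class embeddings in the batch and push it away from the rest.
    embs = F.normalize(embs, dim=1)
    sims = torch.exp(embs @ embs.T / tau)
    sims.fill_diagonal_(0.)                          # drop self-similarity
    same_class = (labels[:, None] == labels[None, :]).float()
    same_class.fill_diagonal_(0.)
    pos = (sims * same_class).sum(dim=1)
    has_pos = same_class.sum(dim=1) > 0              # anchors with at least one positive
    return -torch.log(pos[has_pos] / sims[has_pos].sum(dim=1)).mean()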

Pretrained models

We provide the full checkpoints for the PAWS pre-trained models, both with and without fine-tuning. The full checkpoints for the pretrained models contain the backbone, projection head, and prediction head weights. The finetuned model checkpoints, on the other hand, only include the backbone and linear classifier head weights. Top-1 classification accuracy for the pretrained models is reported using a nearest neighbour classifier. Top-1 classification accuracy for the finetuned models is reported using the class labels predicted by the network's last linear layer.

                    1% labels                      10% labels
epochs   network    pretrained (NN)   finetuned    pretrained (NN)   finetuned
300      RN50       65.4%             66.5%        73.1%             75.5%
200      RN50       64.6%             66.1%        71.9%             75.0%
100      RN50       62.6%             63.8%        71.0%             73.9%

Running PAWS semi-supervised pre-training and fine-tuning

Config files

All experiment parameters are specified in config files (as opposed to command-line arguments). Config files make it easier to keep track of different experiments and to launch batches of jobs at once. See the configs/ directory for example config files.
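
The pattern is the usual one of reading a YAML file into a nested dictionary, roughly as in the sketch below (illustrative only, not the repo's parsing code):

import yaml

# Load an experiment config into a nested dictionary
# (PyYAML is listed under the requirements below).
with open('configs/paws/cifar10_train.yaml', 'r') as f:
    params = yaml.safe_load(f)
print(sorted(params.keys()))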

Requirements

  • Python 3.8
  • PyTorch 1.7.1
  • torchvision
  • CUDA 11.0
  • Apex with CUDA extension
  • Other dependencies: PyYaml, numpy, opencv, submitit

Labeled Training Splits

For reproducibility, we have pre-specified the labeled training images as .txt files in the imagenet_subsets/ and cifar10_subsets/ directories. Based on the specifications in your experiment's config file, our implementation will automatically use the images listed in one of these .txt files as the set of labeled images. On ImageNet, if you request a split of the data that is not contained in imagenet_subsets/ (for example, if you set unlabeled_frac != 0.9 and unlabeled_frac != 0.99, i.e., neither the 10%-labeled nor the 1%-labeled setting), then at the start of training the code will independently flip a coin for each training image, keeping the image's label with probability 1 - unlabeled_frac.
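
The coin flip amounts to the following (an illustrative sketch with made-up variable names, not the repo's code):

import numpy as np

unlabeled_frac = 0.95            # a split that is not pre-specified in imagenet_subsets/
num_train_images = 1281167       # size of the ImageNet-1k training set
keep_label = np.random.rand(num_train_images) < (1. - unlabeled_frac)
labeled_indices = np.where(keep_label)[0]   # images that keep their labels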

Single-GPU training

PAWS is very simple to implement and experiment with. Our implementation starts from main.py, which parses the experiment config file and runs the desired script (e.g., PAWS pre-training or fine-tuning) locally on a single GPU.

CIFAR10 pre-training

For example, to pre-train with PAWS on CIFAR10 locally on a single GPU, using the pre-training experiment configs specified in configs/paws/cifar10_train.yaml, run:

python main.py \
  --sel paws_train \
  --fname configs/paws/cifar10_train.yaml

CIFAR10 evaluation

To fine-tune the pre-trained model for a few optimization steps with the SuNCEt (supervised noise contrastive estimation) loss on a single GPU, using the experiment configs specified in configs/paws/cifar10_snn.yaml, run:

python main.py \
  --sel snn_fine_tune \
  --fname configs/paws/cifar10_snn.yaml

To then evaluate the nearest-neighbours performance of the model, locally, on a single GPU, run:

python snn_eval.py \
  --model-name wide_resnet28w2 --use-pred \
  --pretrained $path_to_pretrained_model \
  --unlabeled_frac $1.-fraction_of_labeled_train_data_to_support_nearest_neighbour_classification \
  --root-path $path_to_root_datasets_directory \
  --image-folder $image_directory_inside_root_path \
  --dataset-name cifar10_fine_tune \
  --split-seed $which_prespecified_seed_to_split_labeled_data

Multi-GPU training

Running PAWS across multiple GPUs on a cluster is also very simple. In the multi-GPU setting, the implementation starts from main_distributed.py, which, in addition to parsing the config file and launching the desired script, also allows for specifying details about distributed training. For distributed training, we use the popular open-source submitit tool and provide examples for a SLURM cluster, but feel free to edit main_distributed.py for your purposes to specify a different approach to launching a multi-GPU job on a cluster.
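
For orientation, launching a SLURM job with submitit follows a pattern like the sketch below; this is illustrative only, main_distributed.py handles the real argument plumbing, and train_fn is a hypothetical stand-in for the selected script.

import submitit

def train_fn():
    # Hypothetical stand-in for the selected script (e.g., PAWS pre-training).
    pass

executor = submitit.AutoExecutor(folder='logs/')
executor.update_parameters(
    nodes=8,
    tasks_per_node=8,
    timeout_min=1000,
    slurm_partition='your_partition')
job = executor.submit(train_fn)
print(job.job_id)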

ImageNet pre-training

For example, to pre-train with PAWS on 64 GPUs, using the pre-training experiment configs specified in configs/paws/imgnt_train.yaml, run:

python main_distributed.py \
  --sel paws_train \
  --fname configs/paws/imgnt_train.yaml \
  --partition $slurm_partition \
  --nodes 8 --tasks-per-node 8 \
  --time 1000 \
  --device volta16gb

ImageNet fine-tuning

To fine-tune a pre-trained model on 4 GPUs using the fine-tuning experiment configs specified inside configs/paws/fine_tune.yaml, run:

python main_distributed.py \
  --sel fine_tune \
  --fname configs/paws/fine_tune.yaml \
  --partition $slurm_partition \
  --nodes 1 --tasks-per-node 4 \
  --time 1000 \
  --device volta16gb

To evaluate the fine-tuned model locally on a single GPU, use the same config file, configs/paws/fine_tune.yaml, but change training: true to training: false. Then run:

python main.py \
  --sel fine_tune \
  --fname configs/paws/fine_tune.yaml

Soft Nearest Neighbours evaluation

To evaluate the nearest-neighbours performance of a pre-trained ResNet50 model on a single GPU, run:

python snn_eval.py \
  --model-name resnet50 --use-pred \
  --pretrained $path_to_pretrained_model \
  --unlabeled_frac $1.-fraction_of_labeled_train_data_to_support_nearest_neighbour_classification \
  --root-path $path_to_root_datasets_directory \
  --image-folder $image_directory_inside_root_path \
  --dataset-name $one_of:[imagenet_fine_tune, cifar10_fine_tune]
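
Conceptually, the soft nearest-neighbour classifier applies the same similarity-weighted vote used for pseudo-labeling during pre-training, now over the labeled training embeddings. The sketch below is illustrative; the names are placeholders, not snn_eval.py's interface.

import torch.nn.functional as F

def snn_classify(test_embs, labeled_embs, labeled_onehot, tau=0.1):
    # Soft nearest-neighbour prediction: a softmax-weighted vote over the
    # labels of the labeled training embeddings.
    test_embs = F.normalize(test_embs, dim=1)
    labeled_embs = F.normalize(labeled_embs, dim=1)
    probs = F.softmax(test_embs @ labeled_embs.T / tau, dim=1) @ labeled_onehot
    return probs.argmax(dim=1)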

License

See the LICENSE file for details about the license under which this code is made available.

Citation

If you find this repository useful in your research, please consider giving a star and a citation 🐾

@article{assran2021semisupervised,
  title={Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples},
  author={Assran, Mahmoud and Caron, Mathilde and Misra, Ishan and Bojanowski, Piotr and Joulin, Armand and Ballas, Nicolas and Rabbat, Michael},
  journal={arXiv preprint arXiv:2104.13963},
  year={2021}
}
@article{assran2020supervision,
  title={Supervision Accelerates Pretraining in Contrastive Semi-Supervised Learning of Visual Representations},
  author={Assran, Mahmoud and Ballas, Nicolas and Castrejon, Lluis and Rabbat, Michael},
  journal={arXiv preprint arXiv:2006.10803},
  year={2020}
}