Reproducing-BowNet: Learning Representations by Predicting Bags of Visual Words

Overview

Reproducing-BowNet

Our reproducibility effort for the 2020 ML Reproducibility Challenge. We reproduce the results of the CVPR 2020 paper "Learning Representations by Predicting Bags of Visual Words" by Gidaris et al.: S. Gidaris, A. Bursuc, N. Komodakis, P. Pérez, and M. Cord, "Learning Representations by Predicting Bags of Visual Words," arXiv, 27-Feb-2020. [Online]. Available: https://arxiv.org/abs/2002.12247. [Accessed: 15-Nov-2020].

Group project for the UWaterloo course SYDE 671 - Advanced Image Processing, by Harry Nguyen, Stone Yun, and Hisham Mohammad.

The code base is implemented in PyTorch. The dataloader is adapted from the GitHub repository released by the authors of the RotNet paper: https://github.com/gidariss/FeatureLearningRotNet

Our model definitions are in model.py. Custom loss and layer class definitions are in layers.py.
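For context, the core BowNet objective is a cross-entropy between a softmax prediction over the visual-word vocabulary and the target bag-of-words distribution. The class below is a minimal sketch of that idea to orient readers; it is not necessarily the exact implementation found in layers.py.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BowPredictionLoss(nn.Module):
    """Cross-entropy between a softmax prediction over K visual words and a
    normalized bag-of-words target distribution (illustrative sketch)."""

    def forward(self, logits: torch.Tensor, bow_target: torch.Tensor) -> torch.Tensor:
        # logits:     (batch, K) raw scores over the visual-word vocabulary
        # bow_target: (batch, K) target BOW histogram, each row summing to 1
        log_probs = F.log_softmax(logits, dim=1)
        return -(bow_target * log_probs).sum(dim=1).mean()

# Usage: loss = BowPredictionLoss()(predicted_logits, bow_targets)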

See dependencies.txt for the list of libraries that need to be installed. Either pip install or conda install works.

Before running the experiments:

Inside the project directory, create a folder ./datasets/CIFAR, download the CIFAR-100 dataset from https://www.cs.toronto.edu/~kriz/cifar.html, and place it in that folder.
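Alternatively, torchvision can download and unpack CIFAR-100 into that folder for you. This is a convenience sketch and assumes the adapted dataloader reads the standard python-pickled CIFAR-100 files; adjust the path if your setup differs.

from torchvision.datasets import CIFAR100

# Downloads and extracts the standard python-pickled CIFAR-100 files
# into ./datasets/CIFAR/cifar-100-python/
CIFAR100(root='./datasets/CIFAR', train=True, download=True)
CIFAR100(root='./datasets/CIFAR', train=False, download=True)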

For running the code:

Pretrained weights of BowNet and RotNet from our best results are in the saved_weights directory. To generate your own RotNet checkpoint, run rotation_prediction_training.py, which trains a new RotNet from scratch and saves the checkpoint as rotnet1_checkpoint.pt.

To run rotnet_linearclf.py or rotnet_nonlinearclf.py, you need the checkpoint file of a pretrained RotNet (e.g. saved_weights/rotnet.pt). These scripts load the pretrained RotNet and use its feature maps to train a classifier on CIFAR-100, as sketched after the commands below.

$python rotnet_linearclf.py --checkpoint /path/to/checkpoint

$python rotnet_nonlinearclf.py --checkpoint /path/to/checkpoint
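Conceptually, these scripts load the RotNet checkpoint, freeze its convolutional backbone, and fit a classifier head on the extracted feature maps. The sketch below shows that pattern only; the class name RotNet, the checkpoint layout, the features() call, and the feature size are illustrative assumptions, not the repo's exact API.

import torch
import torch.nn as nn

from model import RotNet  # assumed class name in model.py

rotnet = RotNet()
checkpoint = torch.load('saved_weights/rotnet.pt', map_location='cpu')
rotnet.load_state_dict(checkpoint)  # or checkpoint['model_state_dict'], depending on how it was saved

# Freeze the pretrained backbone; only the classifier head is trained.
for p in rotnet.parameters():
    p.requires_grad = False
rotnet.eval()

# Linear probe on flattened feature maps (feature size is illustrative).
num_features, num_classes = 256 * 8 * 8, 100  # CIFAR-100 has 100 classes
linear_clf = nn.Linear(num_features, num_classes)
optimizer = torch.optim.SGD(linear_clf.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Per batch: features = rotnet.features(images).flatten(1)   # hypothetical feature call
#            loss = criterion(linear_clf(features), labels)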

bownet_plus_linearclf_cifar_training.py takes a pretrained BowNet and uses its feature maps to train a linear classifier on CIFAR-100. kmeans_cluster_and_bownet_training.py loads a pretrained RotNet, performs K-Means clustering of its feature maps, then trains BowNet on BOW reconstruction. You will therefore need pretrained BowNet and RotNet checkpoints, respectively.

We also include a precomputed RotNet codebook for K = 2048 clusters. If you pass its path to kmeans_cluster_and_bownet_training.py, the script skips the codebook-generation step and goes straight to BOW-reconstruction training (see the sketch after the commands below).

$python bownet_plus_linearclf_cifar_training.py --checkpoint /path/to/bownet/checkpoint

$python kmeans_cluster_and_bownet_training.py --checkpoint /path/to/rotnet/checkpoint [optional: --rotnet_vocab /path/to/rotnet/vocab.npy]
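For orientation, the codebook step amounts to running K-Means over RotNet feature vectors sampled across the training set, and the BOW target for an image is a normalized histogram of nearest-visual-word assignments over its feature map. The sketch below illustrates that idea with scikit-learn and hard assignment; the function names, tensor shapes, and use of MiniBatchKMeans are our own simplifications, not the exact procedure in kmeans_cluster_and_bownet_training.py.

import numpy as np
import torch
from sklearn.cluster import MiniBatchKMeans

K = 2048  # vocabulary size matching our precomputed RotNet codebook


def build_codebook(features: torch.Tensor) -> np.ndarray:
    """features: (N, C, H, W) conv feature maps collected from the frozen RotNet."""
    descriptors = features.permute(0, 2, 3, 1).reshape(-1, features.shape[1]).numpy()
    kmeans = MiniBatchKMeans(n_clusters=K, batch_size=4096, n_init=3)
    kmeans.fit(descriptors)
    return kmeans.cluster_centers_  # (K, C) visual-word codebook


def bow_target(feature_map: torch.Tensor, codebook: np.ndarray) -> torch.Tensor:
    """feature_map: (C, H, W) for a single image -> normalized (K,) BOW distribution."""
    desc = feature_map.permute(1, 2, 0).reshape(-1, feature_map.shape[0])   # (H*W, C)
    dists = torch.cdist(desc, torch.from_numpy(codebook).float())           # (H*W, K)
    words = dists.argmin(dim=1)                                             # hard nearest-word assignment
    hist = torch.bincount(words, minlength=K).float()
    return hist / hist.sum()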
