Generic image compressor for machine learning. PyTorch code for our paper "Lossy Compression for Lossless Prediction".

Overview

License: MIT | Python 3.8+

This repository contains our implementation of the paper Lossy Compression for Lossless Prediction, which formalizes and empirically investigates unsupervised training of task-specific compressors.

Using the compressor

If you want to use our compressor directly, the easiest way is to load the model from torch hub, as shown in the Google Colab (or notebooks/Hub.ipynb) or the example below.

Installation details
pip install torch torchvision tqdm numpy compressai sklearn git+https://github.com/openai/CLIP.git

Using pytorch>1.7.1 : CLIP forces PyTorch version 1.7.1 because it needs that version to use JIT. If you don't need JIT (there is no JIT by default), you can actually use more recent versions of torch and torchvision: pip install -U torch torchvision. Make sure to update after having installed CLIP.


import time

import torch
from sklearn.svm import LinearSVC
from torchvision.datasets import STL10

DATA_DIR = "data/"

# list available compressors. b01 compresses the most (b01 > b005 > b001)
torch.hub.list('YannDubs/lossyless:main') 
# ['clip_compressor_b001', 'clip_compressor_b005', 'clip_compressor_b01']

# Load the desired compressor and transformation to apply to images (by default on GPU if available)
compressor, transform = torch.hub.load('YannDubs/lossyless:main','clip_compressor_b005')

# Load some data to compress and apply transformation
stl10_train = STL10(
    DATA_DIR, download=True, split="train", transform=transform
)
stl10_test = STL10(
    DATA_DIR, download=True, split="test", transform=transform
)

# Compress the datasets and save them to file (this requires a GPU)
# Rate: 1506.50 bits/img | Encoding: 347.82 img/sec
compressor.compress_dataset(
    stl10_train,
    f"{DATA_DIR}/stl10_train_Z.bin",
    label_file=f"{DATA_DIR}/stl10_train_Y.npy",
)
compressor.compress_dataset(
    stl10_test,
    f"{DATA_DIR}/stl10_test_Z.bin",
    label_file=f"{DATA_DIR}/stl10_test_Y.npy",
)

# Load and decompress the datasets from file (does not require a GPU)
# Decoding: 1062.38 img/sec
Z_train, Y_train = compressor.decompress_dataset(
    f"{DATA_DIR}/stl10_train_Z.bin", label_file=f"{DATA_DIR}/stl10_train_Y.npy"
)
Z_test, Y_test = compressor.decompress_dataset(
    f"{DATA_DIR}/stl10_test_Z.bin", label_file=f"{DATA_DIR}/stl10_test_Y.npy"
)

# Downstream STL10 evaluation. Accuracy: 98.65% | Training time: 0.5 sec
clf = LinearSVC(C=7e-3)
start = time.time()
clf.fit(Z_train, Y_train)
delta_time = time.time() - start
acc = clf.score(Z_test, Y_test)
print(
    f"Downstream STL10 accuracy: {acc*100:.2f}%.  \t Training time: {delta_time:.1f} "
)
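
The decompressed Z_train/Z_test arrays are plain feature matrices, so any scikit-learn estimator can be swapped in for the linear SVM above. For instance (a minimal variation for illustration, not from the paper):

# Any scikit-learn classifier works on the decompressed features
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(max_iter=1000)
clf.fit(Z_train, Y_train)
print(f"Logistic regression accuracy: {clf.score(Z_test, Y_test) * 100:.2f}%")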

Minimal training code

If your goal is to look at a minimal version of the code to simply understand what is going on, I would highly recommend starting from notebooks/minimal_compressor.ipynb (or the Google Colab link above). This is a notebook version of the code provided in Appendix E.7 of the paper, to quickly train and evaluate our compressor.
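
If you only want the gist before opening the notebook, the sketch below shows one plausible way to set up the rate term with compressai's EntropyBottleneck on top of frozen CLIP features. This is an illustration under stated assumptions (feature dimension, reshaping, loss weighting), not the notebook's exact code:

# A hedged sketch of the rate term, NOT the notebook's exact code.
# Assumptions: 512-dimensional CLIP features and a rate-distortion weight
# beta (the hub models b001/b005/b01 correspond to different beta values).
import torch
from compressai.entropy_models import EntropyBottleneck

feature_dim = 512  # CLIP ViT-B/32 feature size
entropy_bottleneck = EntropyBottleneck(feature_dim)

def rate_term(z):
    """z: (batch, feature_dim) CLIP features -> (quantized z, bits/image)."""
    # EntropyBottleneck expects a channel dimension followed by spatial
    # dimensions, so reshape feature vectors to (batch, feature_dim, 1, 1).
    z = z.unsqueeze(-1).unsqueeze(-1)
    z_hat, likelihoods = entropy_bottleneck(z)
    rate = -torch.log2(likelihoods).sum() / z.size(0)  # expected bits/image
    return z_hat.squeeze(-1).squeeze(-1), rate

# Training minimizes distortion + beta * rate, where the distortion is the
# invariance/contrastive term from the paper; compressai additionally
# requires optimizing entropy_bottleneck.loss() (its auxiliary quantile loss).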

Installation details
  1. pip install git+https://github.com/openai/CLIP.git
  2. pip uninstall -y torchtext (probably not necessary, but torchtext can cause issues if it was installed against the wrong PyTorch version)
  3. pip install scikit-learn==0.24.2 lightning-bolts==0.3.4 compressai==1.1.5 pytorch-lightning==1.3.8

Using pytorch>1.7.1 : CLIP forces PyTorch version 1.7.1, but you should be able to use more recent versions. E.g.:

  1. pip install git+https://github.com/openai/CLIP.git
  2. pip install -U torch torchvision scikit-learn lightning-bolts compressai pytorch-lightning
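
After upgrading, you can quickly verify the installed versions (plain Python, nothing repo-specific):

python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"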

Results from the paper

We provide scripts to essentially replicate some results from the paper. The exact results will differ slightly, as we simplified and cleaned some of the code to improve readability. All scripts can be found in bin and run using the command bin/*/<experiment>.sh.

Installation details
  1. Clone repository
  2. Install PyTorch >= 1.7
  3. pip install -r requirements.txt

Other installation

  • For the bare minimum packages: use pip install -r requirements_mini.txt instead.
  • For conda: use conda env update --file requirements/environment.yaml.
  • For docker: we provide a dockerfile at requirements/Dockerfile.

Notes

  • CLIP forces PyTorch version 1.7.1 because it needs that version to use JIT. We don't use JIT, so you can actually use more recent versions of torch and torchvision: pip install -U torch torchvision.
  • For better logging: Hydra and PyTorch Lightning logging don't work great together. To have a better logging experience, you should comment out the following lines in pytorch_lightning/__init__.py :
if not _root_logger.hasHandlers():
     _logger.addHandler(logging.StreamHandler())
     _logger.propagate = False
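
To locate that file in your environment, a standard Python one-liner works:

python -c "import pytorch_lightning; print(pytorch_lightning.__file__)"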

Test installation

To test your installation and check that everything works as desired, you can run bin/test.sh, which runs one epoch of BICNE and VIC on MNIST.


Scripts details

All scripts can be found in bin and run using the command bin/*/<experiment>.sh. This will save all results, checkpoints, logs, etc. The most important outputs are saved under results/exp_<experiment>: the summarized metrics results/exp_<experiment>*/summarized_metrics_merged.csv and any figures results/exp_<experiment>*/*.png.
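
Assuming the summarized metrics are standard CSV files (a reasonable guess given the extension; the experiment name below is only an example), you can inspect them with pandas:

import glob

import pandas as pd

# matches the results/exp_<experiment>*/summarized_metrics_merged.csv pattern above
for path in glob.glob("results/exp_banana_viz_VIC*/summarized_metrics_merged.csv"):
    print(path)
    print(pd.read_csv(path).head())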

The key experiments that do not require very large compute are:

  • VIC/VAE on rotation invariant Banana distribution: bin/banana/banana_viz_VIC.sh
  • VIC/VAE on augmentation invariant MNIST: bin/mnist/augmnist_viz_VIC.sh
  • CLIP experiments: bin/clip/main_linear.sh

By default all scripts log results to Weights & Biases. If you have an account (or make one), you should set your username in conf/user.yaml after wandb_entity:; the password should be set directly in your environment variables. If you prefer not to log, you can use the command bin/*/<experiment>.sh -a logger=csv, which changes (-a is for append) the default wandb logger to a CSV logger.

Generally speaking, you can change any of the parameters either directly in conf/**/<file>.yaml or by adding -a to the script. We use Hydra to manage our configurations; refer to their documentation if something is unclear.
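
For example, a hypothetical override (the parameter path trainer.max_epochs is illustrative and assumes the configs expose PyTorch Lightning's trainer options; check conf/ for the actual keys):

bin/mnist/augmnist_viz_VIC.sh -a trainer.max_epochs=5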

If you are using Slurm, you can submit the script directly on servers by adding a config file under conf/slurm/<myserver>.yaml and then running the script as bin/*/<experiment>.sh -s <myserver>. For example configuration files for Slurm, see conf/slurm/vector.yaml or conf/slurm/learnfair.yaml. For more information, check the documentation of the submitit plugin, which we use.


VIC/VAE on rotation invariant Banana

Command:

bin/banana/banana_viz_VIC.sh

The following figures are saved automatically at results/exp_banana_viz_VIC/**/quantization.png. On the left we see the quantization of the Banana distribution by a standard compressor (called VAE in the code but VC in the paper); on the right, by our (rotation) invariant compressor (VIC).

[Figure: standard compression of Banana (left) | invariant compression of Banana (right)]

VIC/VAE on augmented MNIST

Command:

bin/mnist/augmnist_viz_VIC.sh

The following figure is saved automatically at results/exp_augmnist_viz_VIC/**/rec_imgs.png. It shows source augmented MNIST images as well as the reconstructions using our invariant compressor.

[Figure: invariant compression of augmented MNIST]

CLIP compressor

Command:

bin/clip/main_small.sh

The following table comes directly from the results, which are automatically saved at results/exp_clip_bottleneck_linear_eval/**/datapred_*/**/results_predictor.csv. It shows the results of our CLIP compressor on many datasets.

                 Cars196   STL10   Caltech101   Food101   PCam   Pets37   CIFAR10   CIFAR100
Rate [bits]         1471    1342         1340      1266   1491     1209      1407       1413
Test Acc. [%]       80.3    98.5         93.3      83.8   81.1     88.8      94.6       79.0

Note: ImageNet is too large for training an SVM using scikit-learn. You need to run the MLP evaluation with bin/clip/clip_bottleneck_mlp_eval. Also, you have to download ImageNet manually.

Cite

You can read the full paper here. Please cite our paper if you use our model:

@inproceedings{
    dubois2021lossy,
    title={Lossy Compression for Lossless Prediction},
    author={Yann Dubois and Benjamin Bloem-Reddy and Karen Ullrich and Chris J. Maddison},
    booktitle={Neural Compression: From Information Theory to Applications -- Workshop @ ICLR 2021},
    year={2021},
    url={https://arxiv.org/abs/2106.10800}
}