Kompressor

A lossless neural compression framework built on top of JAX.

Branch          CI      Coverage
main (active)   Build   codecov
development     Build   codecov

Overview

Install

setup.py assumes that compatible versions of JAX and jaxlib are already installed. The automated build is tested in a cuda:11.1-cudnn8-runtime-ubuntu20.04 environment with jaxlib==0.1.76+cuda11.cudnn82.

git clone https://github.com/rosalindfranklininstitute/kompressor.git
cd kompressor
pip install -e .

# Run tests
python -m pytest --cov=src/kompressor tests/

Install & Run through Docker environment

A Docker image with the Kompressor dependencies is provided as quay.io/rosalindfranklininstitute/kompressor:main on Quay.io.

# Run the container for the Kompressor environment
docker run --rm quay.io/rosalindfranklininstitute/kompressor:main \
    python -m pytest --cov=/usr/local/kompressor/src/kompressor /usr/local/kompressor/tests

Install & Run through Singularity environment

A Singularity image with the Kompressor dependencies is provided as rosalindfranklininstitute/kompressor/kompressor:main on cloud.sylabs.io.

singularity pull library://rosalindfranklininstitute/kompressor/kompressor:main
singularity run kompressor_main.sif \
    python -m pytest --cov=/usr/local/kompressor/src/kompressor /usr/local/kompressor/tests
Comments
  • Refactor map tuples to dicts

    Closes #14. Functions that previously returned an ordered tuple of maps (lrmap, udmap, cmap, ...) now return keyed dictionaries { 'lrmap': lrmap, 'udmap': udmap, 'cmap': cmap, ... } so that ordering and usage are explicitly enforced.

    List comprehensions over the tuples now use jax.tree_map and jax.tree_multimap to ensure key safety.

    @GMW99, this will break the current implementation of the Metrics Callback class which iterates over a zip of the hardcoded map names and the maps tuple. This iteration can be replaced by iterating over maps.items() since it is now a dict already.
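
    A minimal sketch of the new return shape and the key-safe iteration, using illustrative map values and a hypothetical make_maps helper (not the real Kompressor function):

    import jax
    import jax.numpy as jnp

    def make_maps(lrmap, udmap, cmap):
        # Return a keyed dict instead of an ordered tuple so downstream code
        # cannot silently depend on positional ordering.
        return {'lrmap': lrmap, 'udmap': udmap, 'cmap': cmap}

    maps = make_maps(jnp.zeros((4, 4)), jnp.ones((4, 4)), jnp.zeros((4, 4)))

    # jax.tree_map applies a function to every leaf while preserving the keys.
    flattened = jax.tree_map(lambda m: m.reshape(-1), maps)

    # Callbacks can iterate over explicit name/value pairs rather than zipping
    # against hard-coded names.
    for name, value in maps.items():
        print(name, value.shape)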

    enhancement 
    opened by JossWhittle 1
  • Ensure jax.jit static_argnums is refactored to static_argnames

    Functions that currently mark static_argnums=(0, 1, 2) should be updated to use the safer static_argnames=('tom', 'dick', 'harry') that is now available.
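
    A minimal sketch of the refactor on a hypothetical jitted function (the argument names are placeholders, not real Kompressor parameters):

    import functools
    import jax
    import jax.numpy as jnp

    # Before: positional indices are brittle if the argument order ever changes.
    @functools.partial(jax.jit, static_argnums=(1,))
    def scale_old(x, factor):
        return x * factor

    # After: naming the static arguments survives reordering and keyword calls.
    @functools.partial(jax.jit, static_argnames=('factor',))
    def scale_new(x, factor):
        return x * factor

    print(scale_new(jnp.arange(4), factor=2))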

    enhancement high priority 
    opened by JossWhittle 1
  • Update development examples

    • Splits the Docker image into a JAX base image and a Kompressor dependency and install image
    • The JAX image installs JAX from source to ensure correct CUDA / CUDNN versions
    • Adjusts setup.py to install dependencies from requirements.txt
    • Refactors how submodules are imported within the kom.image submodule (need to check volumes matches)
    • Adds a kom.image.data submodule for dealing with TensorFlow data pipelines
    • Fixes pooling in the total variation losses (used as metrics in the example notebooks)
    • Moves all the encoding/decoding functions for the maps into a kom.mapping submodule
    • Adds within-k and run-length metrics to kom.image.metrics for the example notebooks (sketched below)
    • Adds example notebooks for interacting with the maps and training a basic Haiku compression model
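
    For reference, a hedged sketch of what the new metrics might look like, assuming "within-k" means the fraction of predicted values within ±k of the target and "run-length" means the lengths of runs of identical map values (names are illustrative, not the kom.image.metrics API):

    import jax.numpy as jnp

    def within_k(predictions, targets, k=1):
        # Fraction of predictions that fall within +/- k of the true value.
        diff = jnp.abs(predictions.astype(jnp.int32) - targets.astype(jnp.int32))
        return jnp.mean(diff <= k)

    def run_lengths(values):
        # Lengths of runs of identical values in a flattened map.
        flat = values.reshape(-1)
        boundaries = jnp.flatnonzero(flat[1:] != flat[:-1]) + 1
        starts = jnp.concatenate([jnp.array([0]), boundaries])
        ends = jnp.concatenate([boundaries, jnp.array([flat.size])])
        return ends - starts
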
    feature 
    opened by JossWhittle 0
  • Add mapping encode/decode functions for float32 data

    Will need a bit of thinking to get right. We will probably need to consider tricks similar to those we used when applying radix sort to float32 data, to make the compression numerically stable and portable between machines.
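
    One possible direction, sketched under the assumption that the order-preserving bit trick from the radix sort work carries over: bijectively reinterpret float32 values as uint32 so the integer pipeline stays exact and portable. The function names are illustrative:

    import jax
    import jax.numpy as jnp

    def float32_to_ordered_uint32(x):
        # Reinterpret the raw float32 bits, then apply the standard monotonic
        # transform: flip all bits of negatives, flip only the sign bit otherwise.
        bits = jax.lax.bitcast_convert_type(x, jnp.uint32)
        mask = jnp.where((bits >> 31) == 1, jnp.uint32(0xFFFFFFFF), jnp.uint32(0x80000000))
        return bits ^ mask

    def ordered_uint32_to_float32(u):
        # Exact inverse of the transform above, so the round trip is lossless.
        mask = jnp.where((u >> 31) == 0, jnp.uint32(0xFFFFFFFF), jnp.uint32(0x80000000))
        return jax.lax.bitcast_convert_type(u ^ mask, jnp.float32)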

    enhancement low priority 
    opened by JossWhittle 0
  • Add mapping encode/decode functions for uint32 data

    Some of our data consists of uint32 volumes.

    Will need to trace through the full compression implementation and make sure intermediate value dtypes are large enough to avoid uint32 overflow when needed.
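
    A hedged sketch of the kind of care needed, using an illustrative residual_uint32 helper (not existing Kompressor code): widen to a 64-bit intermediate before arithmetic that can leave the uint32 range, then narrow back with modular arithmetic so the round trip stays exact.

    import jax
    import jax.numpy as jnp

    # 64-bit intermediates require enabling x64 support in JAX.
    jax.config.update('jax_enable_x64', True)

    def residual_uint32(target, prediction):
        # Direct uint32 subtraction can wrap; widen first, then store the
        # residual modulo 2**32 so reconstruction is exact.
        wide = target.astype(jnp.int64) - prediction.astype(jnp.int64)
        return (wide % (1 << 32)).astype(jnp.uint32)

    def reconstruct_uint32(residual, prediction):
        # Modular addition recovers the original uint32 values exactly.
        wide = prediction.astype(jnp.int64) + residual.astype(jnp.int64)
        return (wide % (1 << 32)).astype(jnp.uint32)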

    enhancement low priority 
    opened by JossWhittle 0
  • Modify core encode/decode functions to pass a dict to the prediction function

    Currently the lowres inputs are passed directly to the prediction_fn as the only input.

    • Modify to accept a dict that has at least one key for the lowres input.

    • Provide a boolean flag to also pass a positional encoding tensor along with the lowres, which the model can use if needed (see the sketch after this list).

    • Chunked encode/decode will need to generate the matching chunk of the positional encoding for the current data chunk.

    • Model can choose how to use positional encodings.

      • Image case would receive (B, H, W, 2) tensor containing the Y and X coordinates of each pixel in the trailing axis.
      • Volume case would receive (B, D, H, W, 3) tensor containing the Z, Y, and X coordinates of each voxel in the trailing axis.
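
    A minimal sketch of the proposed interface for the image case; the dict keys 'lowres' and 'positions' and the helper names are assumptions, not the final API:

    import jax.numpy as jnp

    def image_coords(batch, height, width):
        # (B, H, W, 2) tensor holding the Y and X coordinate of every pixel.
        ys, xs = jnp.meshgrid(jnp.arange(height), jnp.arange(width), indexing='ij')
        coords = jnp.stack([ys, xs], axis=-1)
        return jnp.broadcast_to(coords, (batch, height, width, 2))

    def prediction_fn(inputs):
        lowres = inputs['lowres']            # always present
        positions = inputs.get('positions')  # present only when the flag is set
        # The model is free to ignore positions; here we just pass lowres through.
        return lowres

    outputs = prediction_fn({'lowres': jnp.zeros((2, 8, 8, 1)),
                             'positions': image_coords(2, 8, 8)})
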
    enhancement high priority 
    opened by JossWhittle 0
  • Look at decompressing sliced chunks

    Decompress a sliced chunk of an image or volume without needing to decompress the entire data element.

    • May require applying the secondary compression in blocks, to avoid having to decompress the full level maps just to apply the predictor to the target slice.

    • Instead, unpack just the blocks needed for the slice and then trim.

    • A kompressor (or a stack of them) trained to secondary-compress the maps from the primary kompressor (or stack) would be able to handle slice-chunked decoding naturally.

      • Could such a secondary compressor be shared between levels? Between multiple kompressors in the primary stack?
    experiment low priority 
    opened by JossWhittle 0
  • Look at compressing timeseries data

    • Experiment with implementing the 1D case for compressing signals.
    • Video as a sequence of 2D frames, using the 3D volume code directly.
    • Look at compressing within a timestep using information from neighbouring timesteps, without actually compressing the temporal axis (dropping frames).
    experiment low priority 
    opened by JossWhittle 0
Releases (v0.0.0)
Owner
Rosalind Franklin Institute
The Rosalind Franklin Institute is dedicated to transforming life science through interdisciplinary research and technology development.