Vector Quantization - Pytorch

Overview

A vector quantization library originally transcribed from DeepMind's TensorFlow implementation, made conveniently into a package. It uses exponential moving averages to update the dictionary.

VQ has been successfully used by DeepMind and OpenAI for high quality generation of images (VQ-VAE-2) and music (Jukebox).
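
As a rough sketch of what such an exponential-moving-average codebook update looks like (purely illustrative, not the library's internal code; embed, cluster_size and embed_avg stand in for the codebook buffers):

import torch
import torch.nn.functional as F

def ema_codebook_update(x, embed, cluster_size, embed_avg, decay = 0.8, eps = 1e-5):
    # x: (num_vectors, dim) flattened encoder outputs
    # embed: (codebook_size, dim), cluster_size: (codebook_size,), embed_avg: (codebook_size, dim)
    indices = torch.cdist(x, embed).argmin(dim = -1)                          # nearest code per vector
    onehot = F.one_hot(indices, embed.shape[0]).type(x.dtype)

    cluster_size.mul_(decay).add_(onehot.sum(dim = 0), alpha = 1 - decay)     # EMA of code usage
    embed_avg.mul_(decay).add_(onehot.t() @ x, alpha = 1 - decay)             # EMA of assigned vectors

    n = cluster_size.sum()
    smoothed = (cluster_size + eps) / (n + embed.shape[0] * eps) * n          # Laplace smoothing
    embed.copy_(embed_avg / smoothed.unsqueeze(-1))                           # refresh the codebook
    return indices

# e.g. ema_codebook_update(torch.randn(1024, 256), torch.randn(512, 256), torch.zeros(512), torch.zeros(512, 256))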

Install

$ pip install vector-quantize-pytorch

Usage

import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 512,     # codebook size
    decay = 0.8,             # the exponential moving average decay, lower means the dictionary will change faster
    commitment = 1.          # the weight on the commitment loss
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x) # (1, 1024, 256), (1, 1024), (1)
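
The returned commitment loss is meant to be folded into your own training objective. Below is a hedged sketch of a single training step; the decoder and reconstruction target are stand-ins for your own model, not part of the library.

import torch
import torch.nn.functional as F
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(dim = 256, codebook_size = 512)
decoder = torch.nn.Linear(256, 256)                             # stand-in decoder, purely illustrative
optimizer = torch.optim.Adam(decoder.parameters(), lr = 3e-4)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x)

loss = F.mse_loss(decoder(quantized), x) + commit_loss.sum()    # task loss + auxiliary commitment loss
loss.backward()                                                 # the codebook itself is updated via EMA, not gradients
optimizer.step()
optimizer.zero_grad()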

Variants

The SoundStream paper proposes using multiple vector quantizers to recursively quantize the residuals of the waveform. You can use this with the ResidualVQ class and one extra initialization parameter.

import torch
from vector_quantize_pytorch import ResidualVQ

residual_vq = ResidualVQ(
    dim = 256,
    num_quantizers = 8,      # specify number of quantizers
    codebook_size = 1024,    # codebook size
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = residual_vq(x)

# (1, 1024, 256), (8, 1, 1024), (8, 1)
# (batch, seq, dim), (quantizer, batch, seq), (quantizer, batch)
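
For intuition, residual quantization can be sketched by hand with plain VectorQuantize layers: each stage quantizes whatever residual the previous stage left behind, and the final quantized output is the sum of all stages. The sketch below illustrates the idea and is not the library's internal code.

import torch
from vector_quantize_pytorch import VectorQuantize

quantizers = [VectorQuantize(dim = 256, codebook_size = 1024) for _ in range(8)]

x = torch.randn(1, 1024, 256)
residual = x
quantized_out = torch.zeros_like(x)

for vq in quantizers:
    quantized, indices, loss = vq(residual)    # quantize what is left to explain
    residual = residual - quantized.detach()   # hand the remainder to the next stage
    quantized_out = quantized_out + quantized  # accumulate the full reconstruction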

Initialization

The SoundStream paper proposes that the codebook should be initialized with the k-means centroids of the first batch. You can easily turn on this feature with the flag kmeans_init = True, for either the VectorQuantize or ResidualVQ class.

import torch
from vector_quantize_pytorch import ResidualVQ

residual_vq = ResidualVQ(
    dim = 256,
    codebook_size = 256,
    num_quantizers = 4,
    kmeans_init = True,   # set to True
    kmeans_iters = 10     # number of kmeans iterations to calculate the centroids for the codebook on init
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = residual_vq(x)
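
For reference, here is a plain k-means sketch of how such centroids could be computed from a first batch of encoder outputs (illustrative only; the library handles this internally when kmeans_init = True):

import torch

def kmeans_centroids(samples, num_clusters, iters = 10):
    # samples: (num_samples, dim); start from randomly chosen samples as initial means
    means = samples[torch.randperm(samples.shape[0])[:num_clusters]].clone()
    for _ in range(iters):
        assignment = torch.cdist(samples, means).argmin(dim = -1)
        for k in range(num_clusters):
            members = samples[assignment == k]
            if members.numel() > 0:            # keep the old mean if a cluster went empty
                means[k] = members.mean(dim = 0)
    return means

# e.g. initializing a 256-entry codebook from a batch of 1024 encoder outputs
codebook_init = kmeans_centroids(torch.randn(1024, 256), num_clusters = 256)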

Increasing codebook usage

This repository contains a few techniques from various papers for combating "dead" codebook entries, a common problem when using vector quantizers.

Lower codebook dimension

The Improved VQGAN paper proposes keeping the codebook in a lower dimension. The encoder outputs are projected down to the codebook dimension before quantization and projected back up afterwards. You can set this with the codebook_dim hyperparameter.

import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 256,
    codebook_dim = 16      # paper proposes setting this to 32 or as low as 8 to increase codebook usage
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x)
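
Conceptually, this amounts to a pair of projections around a nearest-neighbour lookup carried out in the lower dimension. A rough sketch with plain linear layers (an illustration, not the library's exact modules):

import torch

project_in  = torch.nn.Linear(256, 16)    # encoder dimension -> codebook dimension
project_out = torch.nn.Linear(16, 256)    # codebook dimension -> encoder dimension
codebook = torch.randn(256, 16)           # 256 codes living in the low-dimensional space

x = torch.randn(1, 1024, 256)
z = project_in(x).reshape(-1, 16)                                  # (1024, 16): lookup happens in 16 dims
indices = torch.cdist(z, codebook).argmin(dim = -1)                # nearest low-dimensional code per vector
quantized = project_out(codebook[indices]).reshape(1, 1024, 256)   # project back up to 256 dims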

Cosine similarity

The Improved VQGAN paper also proposes L2-normalizing the codes and the encoded vectors, which boils down to using cosine similarity as the distance. They claim that constraining the vectors to a sphere leads to improvements in code usage and downstream reconstruction. You can turn this on by setting use_cosine_sim = True.

import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 256,
    use_cosine_sim = True   # set this to True
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x)
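
For intuition, the cosine-similarity lookup boils down to normalizing both sides and taking the largest dot product (a standalone sketch, not the library's code):

import torch
import torch.nn.functional as F

codes = F.normalize(torch.randn(256, 256), dim = -1)   # unit-norm codebook
z = F.normalize(torch.randn(1024, 256), dim = -1)      # unit-norm encoder outputs
indices = (z @ codes.t()).argmax(dim = -1)             # highest cosine similarity = nearest code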

Expiring stale codes

Finally, the SoundStream paper has a scheme where codes whose hit count falls below a certain threshold are replaced with randomly selected vectors from the current batch. You can set this threshold with the threshold_ema_dead_code keyword.

import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 512,
    threshold_ema_dead_code = 2  # replace any code whose exponential moving average cluster size is less than 2
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x)
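
As a rough sketch of the heuristic (illustrative only, with stand-in tensors rather than the library's internal buffers):

import torch

codebook_size, dim, threshold = 512, 256, 2
embed = torch.randn(codebook_size, dim)              # stand-in codebook
ema_cluster_size = torch.rand(codebook_size) * 5     # stand-in EMA hit counts per code
x = torch.randn(1, 1024, dim)                        # current batch of encoder outputs

dead = ema_cluster_size < threshold                  # codes that are hit too rarely
batch_vectors = x.reshape(-1, dim)
rand_idx = torch.randint(0, batch_vectors.shape[0], (int(dead.sum().item()),))
embed[dead] = batch_vectors[rand_idx]                # overwrite dead codes with random batch samples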

Citations

@misc{oord2018neural,
    title   = {Neural Discrete Representation Learning},
    author  = {Aaron van den Oord and Oriol Vinyals and Koray Kavukcuoglu},
    year    = {2018},
    eprint  = {1711.00937},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}
@misc{zeghidour2021soundstream,
    title   = {SoundStream: An End-to-End Neural Audio Codec},
    author  = {Neil Zeghidour and Alejandro Luebs and Ahmed Omran and Jan Skoglund and Marco Tagliasacchi},
    year    = {2021},
    eprint  = {2107.03312},
    archivePrefix = {arXiv},
    primaryClass = {cs.SD}
}
@inproceedings{anonymous2022vectorquantized,
    title   = {Vector-quantized Image Modeling with Improved {VQGAN}},
    author  = {Anonymous},
    booktitle = {Submitted to The Tenth International Conference on Learning Representations},
    year    = {2022},
    url     = {https://openreview.net/forum?id=pfNyExj7z2},
    note    = {under review}
}
Comments
  • Quantizers are not DDP/AMP compliant

    Hi Lucidrains,

    Thanks for the amazing work you do by implementing all those papers!

    Is there a plan to make the quantizers compliant with:

    • DDP - They need an all gather before calculating anything so the updates are exactly the same across all ranks
    • AMP - In my experience, if AMP touches upon the quantizers it screws up the gradient magnitudes making it NaN/Overflow

    If you want I can have a go at it.
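
    As a hedged illustration of the all-gather point above: one way to make the EMA statistics rank-consistent is to sum the per-rank statistics before applying the decayed update, so every replica performs exactly the same codebook change. This is only a sketch, not the library's current behaviour.

    import torch
    import torch.distributed as dist

    def sync_ema_stats(code_counts, code_sums):
        # code_counts: (codebook_size,) one-hot counts from this rank's batch
        # code_sums:   (codebook_size, dim) sum of the vectors assigned to each code on this rank
        if dist.is_available() and dist.is_initialized():
            dist.all_reduce(code_counts, op = dist.ReduceOp.SUM)
            dist.all_reduce(code_sums, op = dist.ReduceOp.SUM)
        return code_counts, code_sums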

    opened by danieltudosiu 7
  • Commitment Loss Problems

    Hello,

    First of all, thank you so much for this powerful implementation.

    I have been training a VQ-VAE to generate faces from FFHQ 128x128, and I always hit the same problem: when I use the commitment loss (0.25) and the gamma (0.99) from the original paper, the commitment loss seems to grow without bound. I know you said it is an auxiliary loss and not that important, but is this normal behavior? If not, how can I avoid it if I do want to use this loss?

    Thank you so much in advance!

    opened by pedrocg42 6
  • fix dimensions: the codebook must look at data by taking each time frame individually

    In the SoundStream article: "This vector quantizer learns a codebook of N vectors to encode each D-dimensional frame of enc(x)."

    opened by wesbz 5
  • kmeans and ddp hangs

    kmeans and DDP hang for me. DDP is initialized by PyTorch Lightning in my case. I have several questions:

    In https://github.com/lucidrains/vector-quantize-pytorch/blob/master/vector_quantize_pytorch/vector_quantize_pytorch.py#L98

    there is all_num_samples = all_gather_sizes(local_samples, dim = 0); should it be dim = 1 (since dim 0 is the codebook dimension)?

    Then in https://github.com/lucidrains/vector-quantize-pytorch/blob/master/vector_quantize_pytorch/vector_quantize_pytorch.py#L93 it just hangs for me. I am not totally sure, but I believe distributed.broadcast in

    https://github.com/lucidrains/vector-quantize-pytorch/blob/master/vector_quantize_pytorch/vector_quantize_pytorch.py#L90

    is called with incompatible shapes. See https://pytorch.org/docs/stable/distributed.html#torch.distributed.broadcast

    tensor must have the same number of elements in all processes participating in the collective.

    opened by tasptz 4
  • Cannot Converge with L2 Loss

    I am trying to quantize a latent vector. To be specific, I use an encoder to get the latent representation z of the input, quantize z, and then send it into a decoder.

    However, during my experiments I found that the reconstruction loss does not decrease with the L2 loss, namely the EuclideanCodebook. The model does converge with cosine similarity. Do you have any idea about this phenomenon?

    I think cosine similarity only considers the direction of the vector, not its scale, so I still want to use the EuclideanCodebook.

    opened by kingnobro 3
  • Error when using gloo as DDP backend

    Hello! Thank you for your great work on implementing the VQ layer. When I use the VQ layer in DDP mode with gloo as the backend, as suggested in the README, I get the following error: terminate called after throwing an instance of 'gloo::EnforceNotMet' what(): [enforce fail at ../third_party/gloo/gloo/transport/tcp/pair.cc:510] op.preamble.length <= op.nbytes. 8773632 vs 8386560

    Do you have any ideas on how to solve this problem?
    I also tried using nccl as the backend, but the program just hangs forever...

    opened by Saltychtao 3
  • codebook initialization

    Hi, Thank you for this great work. It's quite useful!

    I have been having problems with index collapse and I'm not sure where it's coming from. Upon digging into the code, it seems that when we're not using k-means to initialize the codebook vectors, randn (a normal distribution) is used to initialize them. The VQ-VAE paper specifically uses a uniform distribution for initialization, which allows the authors to ignore the KL divergence when training.

    This is from the VQ-VAE paper: "Since we assume a uniform prior for z, the KL term that usually appears in the ELBO is constant w.r.t. the encoder parameters and can thus be ignored for training."

    Is there any reason why you switched to a normal distribution here?

    Thanks!

    opened by ramyamounir 3
  • possible papers (and code) of interest

    Have you had a look at bitsandbytes?

    https://github.com/TimDettmers/bitsandbytes

    https://arxiv.org/abs/2208.07339

    https://timdettmers.com/2022/08/17/llm-int8-and-emergent-features/

    Also, this paper on trade-offs between various 8-bit quantization formats:

    https://arxiv.org/pdf/2206.02915v1.pdf

    opened by Thomas-MMJ 2
  • RQ-VAE: How can I get a list of all learned codebook vectors (as indexed in the "indices")?

    Hi Lucid, I am working on quantizing CLIP image embeddings with your RQ-VAE. It works pretty well.

    Next I want to take all learned codebook vectors and add them to the vocab of a GPT (as frozen token embeddings).

    The idea is to train a GPT with CLIP image embeddings in between texts (e.g. IMAGE-CAPTION or TEXT-IMAGE-TEXT-IMAGE-..., Flamingo-style).

    If this works, then the GPT could maybe also learn to generate quantized CLIP image embeddings token by token --> and then e.g. show images through a) retrieval or b) a DALLE 2 decoder :)

    ... So my question is: once the RQ-VAE is trained and I can get the quantized reconstructions and indices, how can I get a list or tensor of the actual codebook (all possible vectors from the RQ vocab)? :)
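
    A hedged sketch of one way this could be done, assuming the quantizer stages are exposed under residual_vq.layers and each stage exposes its code vectors via a codebook attribute (both attribute names are assumptions; check them against the version of the library you are running):

    import torch
    from vector_quantize_pytorch import ResidualVQ

    residual_vq = ResidualVQ(dim = 256, num_quantizers = 8, codebook_size = 1024)

    # assumed access path: one (codebook_size, dim) tensor per quantizer stage
    all_codebooks = torch.stack([layer.codebook for layer in residual_vq.layers])
    # under the assumptions above this would be (8, 1024, 256), indexed the same way as the returned indices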

    opened by christophschuhmann 2
  • Expire codes heuristic is replacing inputs

    Thanks for the implementation!

    One question: should this

    https://github.com/lucidrains/vector-quantize-pytorch/blob/ebce893fff695845f7fe0f04d1400d2c29b94f98/vector_quantize_pytorch/vector_quantize_pytorch.py#L177

    actually be self.expire_codes_(quantize)?

    opened by kashif 2
  • orthogonal regularization loss useless?

    Because the codebooks are not registered as trainable parameters, and the orthogonal loss is only a function of the codebooks, is the orthogonal loss entirely useless?

    opened by GallagherCommaJack 2
  • EMA update on CosineCodebook

    The original ViT-VQGAN paper does not seem to use an EMA update for codebook learning, since its codebook consists of unit-normalized vectors.

    In particular, to my understanding, an EMA update does not quite make sense when the encoder outputs and codebook vectors are unit-normalized.

    What's your take on this? Should we NOT use the EMA update with CosineCodebook?

    opened by le4m 3
  • Loss and Backprop Details

    Hi,

    During training, the VQ-VAE backprops on multiple losses. When passing feature maps to the model, we are given a loss; should I manually backpropagate and update weights through it (the good ol' loss.backward() and optimizer.step()), or is it handled implicitly?

    opened by Malik7115 3
  • Missing parameter of beta

    Hi, in the original VQ-VAE paper, the commitment loss is defined as

    (quantize.detach() - x) ** 2 + beta * (quantize - x.detach()) ** 2

    where beta is usually set to 0.25. But commit_loss is defined as follows in your implementation:

    F.mse_loss(quantize.detach(), x)

    So I wonder: is the parameter beta set to 1 by default, or is the second term missing? Thank you very much.

    opened by Corleone-Huang 1
  • No way of training the codebook

    Hi! Could you please explain how the codebook vectors are updated if they are not required to be orthogonal?

    1. embed tensors in both Euclidean and CosineSim codebooks are registered as buffers, so they can't be updated at all
    2. There is no loss on the codebook vectors that moves them closer to the input

    Am I missing something? It seems that right now there is no way of updating the codebook vectors without the orthogonal loss.

    opened by RafailFridman 5
  • Plugging vector-quantize-pytorch into taming-transformers

    Hi,

    I noticed your architecture could be plugged into the pipeline from https://github.com/CompVis/taming-transformers. I have proposed code here (https://github.com/tanouch/taming-transformers) doing that. It makes it possible to properly compare the different features proposed in your repo (lower codebook dimension, cosine similarity, orthogonal regularization loss, etc.) with the original formulation.

    The code from this repo can be seen in both files

    • taming-transformers/taming/models/vqgan.py
    • taming-transformers/taming/modules/vqvae/quantize.py

    As you can see, it is easy to launch a large-scale training run with your proposed architecture.

    I am not sure whether this issue belongs here or in the taming-transformers repo, but I thought you might be interested. Thanks again for your work and these open-sourced repositories!

    opened by tanouch 2