A pure Python implementation of Compact Bilinear Pooling and Count Sketch for PyTorch.

Overview

This repository provides a pure Python implementation of Compact Bilinear Pooling and Count Sketch for PyTorch.

This version relies on the FFT implementation provided with PyTorch 0.4.0 onward. For older versions of PyTorch, use the tag v0.3.0.

Installation

Run setup.py, for instance:

python setup.py install

Usage

class compact_bilinear_pooling.CompactBilinearPooling(input1_size, input2_size, output_size, h1=None, s1=None, h2=None, s2=None)

Basic usage:

import torch
from compact_bilinear_pooling import CountSketch, CompactBilinearPooling

input_size = 2048
output_size = 16000
mcb = CompactBilinearPooling(input_size, input_size, output_size).cuda()
x = torch.rand(4, input_size).cuda()
y = torch.rand(4, input_size).cuda()

z = mcb(x, y)
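
The layer can also be applied to convolutional feature maps. A minimal sketch, assuming the layer operates on the last dimension and broadcasts over the leading ones (the NHWC usage in the comments below relies on the same behaviour):

import torch
from compact_bilinear_pooling import CompactBilinearPooling

mcb = CompactBilinearPooling(512, 512, 8000)
feat = torch.rand(4, 512, 28, 28)  # NCHW feature maps
feat = feat.permute(0, 2, 3, 1)    # NHWC, channels on the last dimension
z = mcb(feat, feat)                # (4, 28, 28, 8000)
z = z.sum(1).sum(1)                # sum-pool over the spatial positions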

Test

A couple of tests of the implementation of Compact Bilinear Pooling and its gradient can be run using:

python test.py

Comments
  • The value in ComplexMultiply_backward function

    Hi @gdlg, thanks for this nice work. I'm confused about the backward pass of the complex multiplication, and I hope you can help me figure it out.

    In forward,

    Z = XY = (Rx + i * Ix)(Ry + i * Iy) = (RxRy - IxIy) + i * (IxRy + RxIy) = Rz + i * Iz
    

    In the backward pass, according to the chain rule, we have

    grad_(L/X) = grad_(L/Z) * grad(Z/X)
               = grad_Z * Y
               = (R_gz + i * I_gz)(Ry + i * Iy)
               = (R_gzRy - I_gzIy) + i * (I_gzRy + R_gzIy)
    

    So why is this line implemented using value = 1 for the real part and value = -1 for the imaginary part?

    Is there something wrong with my reasoning? Thanks.
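
    For reference, one way to resolve this: with a real-valued loss, the chain rule for a complex product picks up a conjugate, i.e.

    grad_(L/X) = grad_Z * conj(Y)
               = (R_gz + i * I_gz)(Ry - i * Iy)
               = (R_gzRy + I_gzIy) + i * (I_gzRy - R_gzIy)

    which is exactly a multiply by value = 1 on the real part and value = -1 on the imaginary part. A minimal numeric check using modern PyTorch's complex autograd (a sketch, not code from this repo):

    import torch

    # For Z = X*Y and a real loss L, grad_X = grad_Z * conj(Y); the conjugate
    # is what introduces the -1 on the imaginary part.
    x = torch.randn(8, dtype=torch.complex128, requires_grad=True)
    y = torch.randn(8, dtype=torch.complex128)
    z = x * y
    loss = z.real.sum() + z.imag.sum()  # grad_Z = 1 + 1j for every element
    loss.backward()
    grad_z = (1 + 1j) * torch.ones(8, dtype=torch.complex128)
    print(torch.allclose(x.grad, grad_z * y.conj()))  # prints True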

    opened by KaiyuYue 8
  • The miss of Rfft

    When I run the test module, it reports that pytorch_fft.fft.autograd has no attribute Rfft. Which version of pytorch_fft should I install to fit this code?

    opened by PeiqinZhuang 8
  • Save the model - TypeError: can't pickle Rfft objects

    How do you save and load the model? I'm using torch.save, which causes the following error:

    File "x/anaconda3/lib/python3.6/site-packages/tor                                                                                                                               ch/serialization.py", line 135, in save
       return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickl                                                                                                                               e_protocol))
     File "x/anaconda3/lib/python3.6/site-packages/tor                                                                                                                               ch/serialization.py", line 117, in _with_file_like
       return body(f)
     File "xanaconda3/lib/python3.6/site-packages/tor                                                                                                                               ch/serialization.py", line 135, in <lambda>
       return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickl                                                                                                                               e_protocol))
     File "x/anaconda3/lib/python3.6/site-packages/tor                                                                                                                               ch/serialization.py", line 198, in _save
       pickler.dump(obj)
    TypeError: can't pickle Rfft objects
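
    A common workaround sketch (an assumption, not a fix confirmed by this repo): save the state_dict, which contains only tensors, instead of pickling the whole module. build_model() below is a hypothetical helper that reconstructs the architecture.

    import torch

    torch.save(model.state_dict(), 'mcb_model.pth')  # tensors only, no Rfft object

    model = build_model()  # hypothetical: rebuild the same architecture
    model.load_state_dict(torch.load('mcb_model.pth'))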
    
    
    opened by idansc 3
  • Multi GPU support

    I modify

    class CompactBilinearPooling(nn.Module):
        def forward(self, x, y):
            return CompactBilinearPoolingFn.apply(self.sketch1.h, self.sketch1.s, self.sketch2.h, self.sketch2.s, self.output_size, x, y)
    

    to

    def forward(self, x):
        x = x.permute(0, 2, 3, 1)  # NCHW to NHWC
        y = Variable(x.data.clone())
        out = CompactBilinearPoolingFn.apply(self.sketch1.h, self.sketch1.s, self.sketch2.h, self.sketch2.s, self.output_size, x, y).permute(0, 3, 1, 2)  # back to NCHW
        out = nn.functional.adaptive_avg_pool2d(out, 1)  # N,C,1,1
        # element-wise signed square root and instance-wise l2 normalization
        out = (torch.sqrt(nn.functional.relu(out)) - torch.sqrt(nn.functional.relu(-out))) / torch.norm(out, 2, 1, True)
        return out
    

    This lets the compact pooling layer be plugged into PyTorch CNNs more easily:

    model.avgpool = CompactBilinearPooling(input_C, input_C, bilinear['dim'])
    model.fc = nn.Linear(int(model.fc.in_features/input_C*bilinear['dim']), num_classes)

    However, when I run this using multiple GPUs, I get the following error:

    Traceback (most recent call last):
      File "train3_bilinear_pooling.py", line 400, in <module>
        run()
      File "train3_bilinear_pooling.py", line 219, in run
        train(train_loader, model, criterion, optimizer, epoch)
      File "train3_bilinear_pooling.py", line 326, in train
        return _each_epoch('train', train_loader, model, criterion, optimizer, epoch)
      File "train3_bilinear_pooling.py", line 270, in _each_epoch
        output = model(input_var)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 319, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 67, in forward
        replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 72, in replicate
        return replicate(module, device_ids)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 19, in replicate
        buffer_copies = comm.broadcast_coalesced(buffers, devices)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/cuda/comm.py", line 55, in broadcast_coalesced
        for chunk in _take_tensors(tensors, buffer_size):
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/_utils.py", line 232, in _take_tensors
        if tensor.is_sparse:
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/autograd/variable.py", line 68, in __getattr__
        return object.__getattribute__(self, name)
    AttributeError: 'Variable' object has no attribute 'is_sparse'

    Do you have any ideas?
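
    The traceback shows replicate() failing while broadcasting module buffers that are stored as autograd Variables rather than plain tensors. A hedged workaround sketch (an assumption, not a confirmed fix): unwrap the sketch parameters into tensor buffers so nn.DataParallel can broadcast them.

    import torch.nn as nn

    class SketchBuffers(nn.Module):
        """Hypothetical module: register the Count Sketch parameters h and s
        as plain tensor buffers instead of Variables."""
        def __init__(self, h, s):
            super(SketchBuffers, self).__init__()
            # .data unwraps a 0.3/0.4-era Variable into the underlying tensor
            self.register_buffer('h', h.data if hasattr(h, 'data') else h)
            self.register_buffer('s', s.data if hasattr(s, 'data') else s)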

    opened by YanWang2014 3
  • AssertionError: False is not true

    Hi, I am back again. When running test.py, I got the following error:

      File "test.py", line 69, in test_gradients
        self.assertTrue(torch.autograd.gradcheck(cbp, (x,y), eps=1))
    AssertionError: False is not true

    What does this mean?
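
    It means torch.autograd.gradcheck found a mismatch between the analytical gradient and its finite-difference estimate. gradcheck is only reliable in double precision; a minimal sketch (assuming the layer accepts double tensors; this is not the repo's own test):

    import torch
    from compact_bilinear_pooling import CompactBilinearPooling

    cbp = CompactBilinearPooling(8, 8, 16).double()
    x = torch.randn(2, 8, dtype=torch.float64, requires_grad=True)
    y = torch.randn(2, 8, dtype=torch.float64, requires_grad=True)
    print(torch.autograd.gradcheck(cbp, (x, y), eps=1e-6, atol=1e-4))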

    opened by YanWang2014 2
  • Support for Pytorch 1.11?

    Hi, torch.fft() and torch.irfft() are no longer functions; torch.fft is now a module, and there appear to be a lot of changes to the parameters. I am currently trying to combine two types of features with compact bilinear pooling. Do you know how to port this code to PyTorch 1.11?
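
    For reference, a hedged porting sketch (an assumption, not an official patch): since PyTorch 1.8 the real-to-complex FFT lives in the torch.fft module and returns native complex tensors, so the frequency-domain product of the two count sketches can be written directly:

    import torch

    def circular_conv(sketch_x, sketch_y, output_size):
        # torch.fft.rfft/irfft replace the removed torch.rfft/torch.irfft;
        # multiplying the complex spectra element-wise is the circular
        # convolution of the two sketches used by compact bilinear pooling.
        fx = torch.fft.rfft(sketch_x, n=output_size, dim=-1)
        fy = torch.fft.rfft(sketch_y, n=output_size, dim=-1)
        return torch.fft.irfft(fx * fy, n=output_size, dim=-1)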

    opened by bhosalems 1
  • Training does not converge after joining compact bilinear layer

    Source code:

        x = self.features(x)  # [4,512,28,28]
        batch_size = x.size(0)
        x = (torch.bmm(x, torch.transpose(x, 1, 2)) / 28 ** 2).view(batch_size, -1)
        x = torch.nn.functional.normalize(torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10))
        x = self.classifiers(x)
        return x

    My code:

        x = self.features(x)  # [4,512,28,28]
        x = x.view(x.shape[0], x.shape[1], -1)  # [4,512,784]
        x = x.permute(0, 2, 1)  # [4,784,512]
        x = self.mcb(x, x)  # [4,784,512]
        batch_size = x.size(0)
        x = x.sum(1)  # for 2D, dim=0 sums over columns and dim=1 over rows; here the tensor is 3D, so this sums over the spatial positions
        x = torch.nn.functional.normalize(torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10))
        x = self.classifiers(x)
        return x

    Training does not converge after this modification. Why? Is there a problem with my code?

    opened by roseif 3