Training RNNs as Fast as CNNs

Overview

News

SRU++, a new SRU variant, is released. [tech report] [blog]

The experimental code and SRU++ implementation are available on the dev branch which will be merged into master later.

About

SRU is a recurrent unit that can run over 10 times faster than cuDNN LSTM, without loss of accuracy, as tested on many tasks.


Figure: Average processing time of LSTM, conv2d and SRU, tested on a GTX 1070

For example, the figure above presents the processing time of a single mini-batch of 32 samples. SRU achieves a 10x to 16x speed-up compared to LSTM, and operates as fast as (or faster than) word-level convolution using conv2d.

Reference:

Simple Recurrent Units for Highly Parallelizable Recurrence [paper]

@inproceedings{lei2018sru,
  title={Simple Recurrent Units for Highly Parallelizable Recurrence},
  author={Tao Lei and Yu Zhang and Sida I. Wang and Hui Dai and Yoav Artzi},
  booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
  year={2018}
}

When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute [paper]

@article{lei2021srupp,
  title={When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute},
  author={Tao Lei},
  journal={arXiv preprint arXiv:2102.12459},
  year={2021}
}

Requirements

Install requirements via pip install -r requirements.txt.


Installation

From source:

SRU can be installed as a regular package via python setup.py install or pip install . from the repo root.

From PyPi:

pip install sru

Directly use the source without installation:

Make sure this repo and the CUDA library can be found by the system, e.g.

export PYTHONPATH=path_to_repo/sru
export LD_LIBRARY_PATH=/usr/local/cuda/lib64

Examples

The usage of SRU is similar to nn.LSTM. SRU likely requires more stacked layers than LSTM. We recommend starting with 2 layers and using more if necessary (see our report for more experimental details).

import torch
from sru import SRU, SRUCell

# input has length 20, batch size 32 and dimension 128
x = torch.FloatTensor(20, 32, 128).cuda()

input_size, hidden_size = 128, 128

rnn = SRU(input_size, hidden_size,
    num_layers = 2,          # number of stacking RNN layers
    dropout = 0.0,           # dropout applied between RNN layers
    bidirectional = False,   # bidirectional RNN
    layer_norm = False,      # apply layer normalization on the output of each layer
    highway_bias = -2,        # initial bias of highway gate (<= 0)
)
rnn.cuda()

output_states, c_states = rnn(x)      # forward pass

# output_states is (length, batch size, number of directions * hidden size)
# c_states is (layers, batch size, number of directions * hidden size)

Contributing

Please read and follow the guidelines.

Other Implementations

@musyoku had a very nice SRU implementation in Chainer.

@adrianbg implemented the first CPU version.


Comments
  • Enable both Pytorch native AMP and Nvidia APEX AMP for SRU

    Hi!

    I was happily using SRUs with PyTorch native AMP; however, I started experimenting with training using Microsoft DeepSpeed and bumped into an issue.

    Basically, the issue is that I observed that FP16 training using DeepSpeed doesn't work for either GRUs or SRUs. However, when using Nvidia APEX AMP, DeepSpeed training with GRUs does work.

    So, based on the tips in one of your issues, I started looking into how I could enable PyTorch native AMP and Nvidia APEX AMP for SRUs, so I could train models based on SRUs using DeepSpeed.

    That is why I created this pull request. Basically, I found that by making the code simpler, I can make SRUs work with both methods of AMP.

    Now amp_recurrence_fp16 can be used for both types of AMP. When amp_recurrence_fp16=True, the tensors are cast to float16; otherwise nothing special happens. So, I also removed the torch.cuda.amp.autocast(enabled=False) region; I might be wrong, but it seems that we don't need it.
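
    As a hedged illustration of the flag discussed above (assuming amp_recurrence_fp16 is accepted directly by the SRU constructor and reusing the shapes from the README example), a native-AMP forward/backward might look like this; treat it as a sketch, not this PR's actual test code:

    import torch
    from sru import SRU

    # Sketch only: amp_recurrence_fp16 as a constructor flag is an assumption here.
    rnn = SRU(128, 128, num_layers=2, amp_recurrence_fp16=True).cuda()
    x = torch.randn(20, 32, 128, device="cuda")

    scaler = torch.cuda.amp.GradScaler()
    with torch.cuda.amp.autocast():
        output, states = rnn(x)        # recurrence may run in float16 when the flag is set
        loss = output.float().mean()
    scaler.scale(loss).backward()      # standard native-AMP backward with gradient scaling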

    I did some tests with my own code and it works in the different scenarios of interest:

    • Using PyTorch native AMP, not using DeepSpeed
    • Not using PyTorch native AMP, not using DeepSpeed
    • Using Nvidia APEX AMP, using DeepSpeed
    • Not using Nvidia APEX AMP, using DeepSpeed

    It would be beneficial if we could test this with an official SRU repo test, maybe by repurposing language_model/train_lm.py?

    opened by visionscaper 13
  • float16 handling

    When I convert my model, which uses this SRU unit, into a float16-enabled one, it fails. Is SRU not implemented for use in a float16 environment, or is it hard to fix?
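
    For reference, a minimal sketch of the kind of conversion being attempted (standard PyTorch half-precision conversion of both module and input; illustrative only, not a confirmed supported path):

    import torch
    from sru import SRU

    rnn = SRU(128, 128, num_layers=2).cuda().half()   # convert parameters to float16
    x = torch.randn(20, 32, 128, device="cuda", dtype=torch.float16)
    output, states = rnn(x)                            # this is the step reported to fail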

    bug 
    opened by ywatanabe1989 11
  • support GPU inference in torchscript

    This is on the 3.0.0-dev branch for now.

    A non-trivial PR to support GPU inference in torchscript

    • Load CUDA kernels as non-Python modules; this is needed for torchscript compilation
    • Refactored the CUDA APIs as functions that return outputs as tensors, instead of procedures that modify passed-in tensors (see the sketch below)
    • Added a workaround in case TS tries to locate and compile CUDA methods on machines that don't have CUDA / GPUs

    The refactored code has passed the forward() & backward() tests. I also checked that the outputs are the same for the non-torchscript and torchscript versions of the same model.
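
    A minimal, illustrative sketch of the functional-vs-procedural distinction above (hypothetical function names, not the repository's actual CUDA bindings):

    import torch

    # Procedural style: the caller allocates the output and the op writes into it.
    def sru_forward_inplace(x: torch.Tensor, out: torch.Tensor) -> None:
        out.copy_(x * 2)   # stand-in for the real kernel's work

    # Functional style (what this PR moves to): the op allocates and returns its output,
    # a form that torchscript can trace and type-check more easily.
    def sru_forward(x: torch.Tensor) -> torch.Tensor:
        return x * 2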

    opened by taoleicn 8
  • Error unpacking PackedSequence on latest version

    Hello @taolei87, after updating to the latest version, my code broke. It works great on the previous 2.3.5 version and with nn.LSTM.

    File "C:\xxx\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
      result = self.forward(*input, **kwargs)
    File "C:\xxx\lib\site-packages\sru\modules.py", line 576, in forward
      mask_pad = (mask_pad >= batch_sizes.view(length, 1)).contiguous()
    RuntimeError: shape '[393, 1]' is invalid for input of size 384
    

    I can see that in the previous version the unpacking code in forward was different:

            input_packed = isinstance(input, nn.utils.rnn.PackedSequence)
            if input_packed:
                input, lengths = nn.utils.rnn.pad_packed_sequence(input)
                max_length = lengths.max().item()
                mask_pad = torch.ByteTensor([[0] * l + [1] * (max_length - l) for l in lengths.tolist()])
                mask_pad = mask_pad.to(input.device).transpose(0, 1).contiguous()
    

    Now it is:

    
            orig_input = input
            if isinstance(orig_input, PackedSequence):
                input, batch_sizes, sorted_indices, unsorted_indices = input
                length = input.size(0)
                batch_size = input.size(1)
                mask_pad = torch.arange(batch_size,
                                        device=batch_sizes.device).expand(length, batch_size)
                mask_pad = (mask_pad >= batch_sizes.view(length, 1)).contiguous()
    
    bug 
    opened by bratao 8
  • Increasing GPU Usage each epoch

    I'm trying to implement a model that includes an SRUCell. These are my specs:

    Tesla M60 GPU; torch.version: 0.4.1.post2; torch.cuda.version: 9.0.176

    Although it's training, the GPU memory usage increases every epoch until it fills up. I made a toy example where this error occurs:

    import torch
    from torch.autograd import Variable
    from sru import SRUCell
    
    
    batch_size = 5
    seq_len = 60
    epochs = 1000
    cuda = torch.cuda.is_available()
    
    model = SRUCell(100, 100)
    
    if cuda:
        model.cuda(0)
    
    optimizer = torch.optim.Adam([
            {'params':model.parameters()}], lr=1e-3)
    
    loss_function = torch.nn.MSELoss()
        
    seq = Variable(torch.rand(batch_size,seq_len,100))
    y = Variable(torch.rand(batch_size,100))
    
    
    if cuda:
        seq = seq.cuda(0)
        y = y.cuda(0)
    
    
    model.train()
    
    for e in range(epochs):
        model.zero_grad()
        
        h = Variable(torch.zeros(batch_size, 100))
        c = Variable(torch.zeros(batch_size, 100))
        
        if cuda:
            h = h.cuda(0)
            c = c.cuda(0)
        
        for i in range(seq_len):
            x = seq[:,i,:]
            h, c = model(x, c)
        loss = loss_function(h, y)
        loss.backward()
        optimizer.step()
        print('Epoch: {} - Loss: {}'.format(e, loss))
    
    opened by santiag0m 8
  • Can I put hidden states in SRU cell forward like in vanilla PyTorch?

    In vanilla PyTorch it works like this:

    rnn = nn.LSTMCell(10, 20)
    input = torch.randn(6, 3, 10)
    hx = torch.randn(3, 20)
    cx = torch.randn(3, 20)
    output = []
    for i in range(6):
        hx, cx = rnn(input[i], (hx, cx))
        output.append(hx)
    

    How can I do the same for SRU cell?
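
    A minimal sketch based on the SRUCell interface used in the GPU-memory issue elsewhere in this thread (the cell takes the current input and the previous cell state and returns new hidden and cell states); the exact signature is an assumption:

    import torch
    from sru import SRUCell

    rnn = SRUCell(10, 20).cuda()
    inputs = torch.randn(6, 3, 10, device="cuda")   # (seq_len, batch, input_size)
    cx = torch.zeros(3, 20, device="cuda")          # initial cell state

    output = []
    for i in range(6):
        # assumed step interface: cell(x_t, c_prev) -> (h_t, c_t)
        hx, cx = rnn(inputs[i], cx)
        output.append(hx)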

    opened by hadaev8 7
  • AttributeError when preprocessing data for DrQA

    First I ran download.sh, and it successfully downloaded GloVe and the train/dev JSONs for SQuAD. However, python prepro.py gave me this:

    Traceback (most recent call last):
      File "prepro.py", line 243, in <module>
        vocab_tag = list(nlp.tagger.tag_names)
    AttributeError: 'Tagger' object has no attribute 'tag_names'
    

    My spaCy version is 2.0.3, and it seems like something broke in the update from the 1.x version listed in requirements; I didn't succeed in fixing it myself. Any suggestions?

    opened by mojesty 7
  • Calculating Backwards For SRU Results in CUDA error.

    I'm not sure how, but I'm seeing this error when I try to compute the backward pass. Have you come across this during your debugging?

    Traceback (most recent call last):
      File "gan_language.py", line 341, in <module>
        G.backward(one)
      File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 156, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
      File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 98, in backward
        variables, grad_variables, retain_graph)
      File "/home/nick/wgan-gp/sru/cuda_functional.py", line 417, in backward
        stream=SRU_STREAM
      File "cupy/cuda/function.pyx", line 129, in cupy.cuda.function.Function.__call__ (cupy/cuda/function.cpp:4010)  File "cupy/cuda/function.pyx", line 111, in cupy.cuda.function._launch (cupy/cuda/function.cpp:3647)
      File "cupy/cuda/driver.pyx", line 127, in cupy.cuda.driver.launchKernel (cupy/cuda/driver.cpp:2541)
      File "cupy/cuda/driver.pyx", line 62, in cupy.cuda.driver.check_status (cupy/cuda/driver.cpp:1446)
    cupy.cuda.driver.CUDADriverError: CUDA_ERROR_INVALID_HANDLE: invalid resource handle
    
    opened by NickShahML 7
  • Speed up data loading / batching for ONE BILLION WORD experiment

    The data loading was inefficient and was found to be the bottleneck of the Billion Word training. This PR rewrote the sharding (which data goes to which GPU / training process) and improved the training speed significantly.

    The figure compares a previous run and a new test run. We see a 40% reduction in training time.

    This means our reported training efficiency is much stronger, improving from 59 GPU days to 36 GPU days, and 4x more efficient than the FairSeq Transformer results.

    opened by taoleicn 6
  • Different input dimension compared to output dimension

    Hi, I'm trying to implement a naive version of this paper in Keras, and was wondering how the case n_in != n_out is handled.

    I went through the code a few times and couldn't understand the element-wise multiplication of (1 - r_t) with x_t if x_t has a different shape than r_t.
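
    For reference, the highway connection being asked about, written out; the projected variant is an assumption about how a dimension mismatch is commonly resolved, not a statement of what the released code does:

    h_t = r_t \odot c_t + (1 - r_t) \odot x_t         (when n_in = n_out)
    h_t = r_t \odot c_t + (1 - r_t) \odot (W x_t)      (when n_in != n_out, with a learned projection W of shape n_out x n_in)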

    question 
    opened by titu1994 6
  • support GPU inference in torchscript model for v2.5 / v2.6

    This PR works for the master branch and the v2.5 / v2.6 releases.

    A non-trivial PR to support GPU inference in torchscript

    • Load CUDA kernels as non-Python modules; this is needed for torchscript compilation
    • Refactored the CUDA APIs as functions that return outputs as tensors, instead of procedures that modify passed-in tensors.
    • Added a workaround in case TS tries to locate and compile CUDA methods on machines that don't have CUDA / GPUs
    • The refactored code has passed the forward() & backward() tests.
    • I also checked that the outputs are the same for the non-torchscript and torchscript versions of the same model.
    opened by taoleicn 5
  • Mixed Precision Training

    Hi,

    First of all, I want to thank you for your great work. I'm using SRUs for speech enhancement; they do very well at a reasonable computational cost.

    I would like to know if there is a way to train SRUs in mixed-precision mode. I tried to enable it by setting precision=16 in the PyTorch Lightning trainer, but that didn't do the trick.

    Kind regards, Zadagu

    opened by Zadagu 1
  • Any documentation on using SRU++ ?

    Hello, I've read and really appreciate your team's wonderful work on SRU++. I want to use this architecture in other tasks, but I'm having trouble finding documentation on SRU++, i.e. how I can use SRU++ the same way as SRU (calling it directly from the sru library after installing with pip install sru). I have looked into the dev-3.0.0 branch, which seems to be the most recently updated branch, but I still have no clue how to call SRU++ modules and integrate them into my custom PyTorch modules. Could you help me?

    opened by thangld201 1
  • FAILED: sru_cuda_kernel.cuda.o

    When I run the example, I hit this issue: FAILED: sru_cuda_kernel.cuda.o, and at the end it reports ninja: build stopped: subcommand failed. What should I do to solve this problem?

    opened by xianyu-123 0
  • Avoid unintended eager cuda initialization

    We noticed that the package initialization for sru eagerly triggers CUDA initialization because of the following chain of module imports: sru.modules -> sru.ops -> cuda_functional; this last module executes the load function of torch.utils.cpp_extension.

    This was detected because of issues when running with the server framework in SUBPROCESS_MODE, i.e. forking a new process to run the model. We got an error complaining that CUDA had already been initialized in the parent process, which was unnecessary because the parent is not meant to run inference on the model.

    This PR makes the loading lazier; concretely, we changed the code in sru.modules to avoid the eager import of sru.ops and instead postpone it until the first SRUCell is instantiated.
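
    A minimal sketch of the lazy-loading pattern described above (LazyCell and its internals are hypothetical, not the actual sru.modules code):

    import torch
    import torch.nn as nn

    class LazyCell(nn.Module):
        """Illustrative cell: the CUDA-backed ops module is imported only when a
        cell is constructed, not when the package itself is imported."""
        _ops = None

        def __init__(self, input_size: int, hidden_size: int):
            super().__init__()
            self.linear = nn.Linear(input_size, hidden_size)
            if torch.cuda.is_available() and LazyCell._ops is None:
                import importlib
                # Deferring this import avoids initializing CUDA in a parent process
                # that only forks workers and never runs the model itself.
                LazyCell._ops = importlib.import_module("sru.ops")

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.linear(x)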

    The changes in this PR have been tested by checking out this branch on an AWS instance with a GPU and running pytest -sv test, which resulted in 141 passed, 161 warnings, and no failures. So we understand this works as expected for both CPU and GPU settings.

    opened by dkasapp 0
  • Unknown builtin op: sru_cuda::sru_bi_forward_simple

    Unknown builtin op: sru_cuda::sru_bi_forward_simple

    When using a bidirectional SRU, regular usage seems to be fine, and compilation to torchscript proceeds without error, but upon trying to infer with the compiled torchscript I get:

    Unknown builtin op: sru_cuda::sru_bi_forward_simple.

    Using PyTorch 1.10, sru 2.6.0, CUDA 11.3

    opened by ctlaltdefeat 2