An open source framework for seq2seq models in PyTorch.

Overview

pytorch-seq2seq


Documentation

This is a framework for sequence-to-sequence (seq2seq) models implemented in PyTorch. The framework provides modular and extensible components for seq2seq models, training and inference, checkpoints, etc. This is an alpha release. We appreciate any kind of feedback or contribution.
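For illustration, the components can be composed roughly as follows. This is a minimal sketch: the class names come from seq2seq.models, but the constructor arguments shown are assumptions based on the released documentation and may differ in your installed version.

from seq2seq.models import EncoderRNN, DecoderRNN, Seq2seq

# Hypothetical sizes, for illustration only.
vocab_size, max_len, hidden_size = 1000, 50, 128

encoder = EncoderRNN(vocab_size, max_len, hidden_size)
decoder = DecoderRNN(vocab_size, max_len, hidden_size, sos_id=2, eos_id=3)  # assumed signature
model = Seq2seq(encoder, decoder)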

What's New in 0.1.6

  • Compatible with PyTorch 0.4
  • Added support for pre-trained word embeddings (see the sketch below)
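A sketch of plugging pre-trained vectors into the encoder. The embedding and update_embedding arguments are assumptions based on this release note; check the EncoderRNN docstring in your installed version before relying on them.

import torch
from seq2seq.models import EncoderRNN

# pretrained_weights: a (vocab_size, hidden_size) float tensor of word vectors,
# e.g. exported from GloVe; random values are used here as a placeholder.
pretrained_weights = torch.rand(1000, 128)
encoder = EncoderRNN(1000, 50, 128, embedding=pretrained_weights, update_embedding=False)  # assumed arguments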

Roadmap

Seq2seq is a fast-evolving field with new techniques and architectures published frequently. The goal of this library is to facilitate the development of such techniques and applications. While constantly improving the quality of code and documentation, we will focus on the following items:

  • Evaluate with benchmarks such as WMT machine translation, COCO image captioning, conversational models, etc.;
  • Provide more flexible model options and improve the usability of the library;
  • Add the latest architectures, such as the CNN-based model proposed in Convolutional Sequence to Sequence Learning and the transformer model proposed in Attention Is All You Need;
  • Support features in new versions of PyTorch.

Installation

This package requires Python 2.7 or 3.6. We recommend creating a new virtual environment for this project (using virtualenv or conda).

Prerequisites

  • Numpy: pip install numpy (Refer here for problems installing Numpy).
  • PyTorch: Refer to the PyTorch website to install the version appropriate for your environment.

Install from source

Currently we only support installation from source using setuptools. Check out the source code and run the following commands:

pip install -r requirements.txt
python setup.py install

If you already have a version of PyTorch installed on your system, please verify that the active torch package is at least version 0.1.11.
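For example, you can check the active version from Python:

import torch
print(torch.__version__)  # should print 0.1.11 or later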

Get Started

Prepare toy dataset

# Run script to generate the reverse toy dataset
# The generated data is stored in data/toy_reverse by default
scripts/toy.sh

Train and play

TRAIN_PATH=data/toy_reverse/train/data.txt
DEV_PATH=data/toy_reverse/dev/data.txt
# Start training
python examples/sample.py --train_path $TRAIN_PATH --dev_path $DEV_PATH

It will take about 3 minutes to train on CPU and less than 1 minute with a Tesla K80. Once training is complete, you will be prompted to enter a new sequence to translate and the model will print out its prediction (use ctrl-C to terminate). Try the example below!

Input:  1 3 5 7 9
Expected output: 9 7 5 3 1 EOS

Checkpoints

Checkpoints are organized by experiments and timestamps, as shown in the following file structure:

experiment_dir
+-- input_vocab
+-- output_vocab
+-- checkpoints
|  +-- YYYY_mm_dd_HH_MM_SS
|     +-- decoder
|     +-- encoder
|     +-- model_checkpoint

By default, the sample script saves checkpoints in the experiment folder of the root directory. Look at the usage of the sample code for more options, including resuming and loading from checkpoints.
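As a rough sketch, a saved experiment can be restored and queried with the Checkpoint and Predictor utilities used by the sample script. The names below follow examples/sample.py; treat the exact signatures as assumptions that may differ between releases.

from seq2seq.evaluator import Predictor
from seq2seq.util.checkpoint import Checkpoint

# Load the most recent checkpoint saved under the experiment directory.
checkpoint = Checkpoint.load(Checkpoint.get_latest_checkpoint("./experiment"))
predictor = Predictor(checkpoint.model, checkpoint.input_vocab, checkpoint.output_vocab)

# Tokens in, predicted tokens out.
print(predictor.predict("1 3 5 7 9".split()))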

Benchmarks

  • WMT Machine Translation (Coming soon)

Troubleshooting and Contributing

If you have any questions, bug reports, or feature requests, please open an issue on GitHub. For live discussions, please go to our Gitter lobby.

We appreciate any kind of feedback or contribution. Feel free to proceed with small issues such as bug fixes and documentation improvements. For major contributions and new features, please discuss them with the collaborators in the corresponding issues.

Development Cycle

We use 4-week release cycles: during each cycle, changes are pushed to the develop branch and finally merged into the master branch at the end of the cycle.

Development Environment

We set up the development environment using Vagrant. Run vagrant up with our 'Vagrantfile' to get started.

The following tools are needed and installed in the development environment by default:

  • Git
  • Python
  • Python packages: nose, mock, coverage, flake8

Test

The quality and maintainability of the project are ensured by comprehensive tests. We encourage writing unit tests and integration tests when contributing new code.

Locally, run nosetests in the package root directory to run unit tests. We use TravisCI to require that a pull request passes all unit tests before it is eligible to merge. See the Travis configuration for more information.

Code Style

We follow PEP8 for code style. In particular, the docstring style is important for generating documentation.

  • Local: Run the following commands in the package root directory
# Python syntax errors or undefined names
flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics
# Style checks
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
  • GitHub: We use Codacy to check style on pull requests and branches.
Comments
  • pytorch-seq2seq slower than OpenNMT-py

    Benchmarked the two implementations using WMT's newstest2013 from German to English. See the training logs in the gist. Despite accuracy differences, pytorch-seq2seq is 10 times slower than OpenNMT-py.

    enhancement high priority 
    opened by kylegao91 7
  • Add a predictor method to return more than one possible sequence

    Would it be possible to add a predictor_n method to this library (or to modify the current predictor) to return more than one sequence as a result? I think it would be a great tool to have when using beam search (with TopKDecoder).

    I coded a first attempt to do that (it seems to work, https://github.com/juan-cb/pytorch-seq2seq/commit/442431001b122fa15c4b6476a9d7411570f53f20), but I'm not sure whether it is the best way to implement it or whether it is completely correct. The desired behavior is to return the n most probable sequences given a src_seq.

    Thanks in advance

    opened by cbjuan 6
  • ValueError: lengths array has to be sorted in decreasing order

    Took me a while to track this down, but there is an error if you run the sample code with the git version of torchtext.

    File "torch/nn/utils/rnn.py", line 79, in pack_padded_sequence
        raise ValueError("lengths array has to be sorted in decreasing order")
    

    The reason is this commit, introduced a month ago in torchtext: https://github.com/pytorch/text/commit/a5049b9d70a699986ae839aca178c33376717cde

    This conflicts with this line in the supervised trainer: https://github.com/IBM/pytorch-seq2seq/blob/9e9fefb9dea882958c88e9c29cfbe9ea6d5408fc/seq2seq/trainer/supervised_trainer.py#L85

    Simply removing the negative sign fixes the issue; however, this will break the code if the PyPI version of torchtext is used.

    A few possible fixes (a sketch of option 2 follows this issue):

    1. Ask torchtext maintainers to revert this upstream change. See PR https://github.com/pytorch/text/pull/95
    2. Detect undesired sorting and reverse batch.
    3. Detect version of torchtext and sort accordingly.
    4. Add in option for sort direction into the supervised trainer.
    enhancement help wanted medium priority 
    opened by kyteague 6
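    A minimal sketch of option (2) above, i.e. detecting the undesired order and re-sorting the batch so lengths are decreasing before packing; the helper name is hypothetical and this is not the library's code:

    import torch

    def sort_batch_by_decreasing_length(input_variables, input_lengths):
        # pack_padded_sequence expects lengths in decreasing order, so sort the
        # lengths and reorder the batch (dim 0) to match.
        lengths = torch.as_tensor(input_lengths)
        sorted_lengths, sorted_idx = lengths.sort(descending=True)
        return input_variables[sorted_idx], sorted_lengths.tolist()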
  • fail to get meaningful response using pytorch-seq2seq for chatbot

    I'm using pytorch-seq2seq for a chatbot. I used two datasets, Ubuntu and Twitter. I've formatted the datasets, modified the data path in "example.py", and tuned some hyper-parameters (e.g. hidden_size, batch_size, epochs).

    However, I failed to get meaningful responses after the model finished training. When I typed in sentences like "hello how are you", it often gave me ['EOS'] or ['i', 'i', 'EOS']. Are there any suggestions for handling this issue?

    opened by DataTerminatorX 6
  • Creating pull request of hacks I needed to run sample.py on cuda

    When I run even the basic sample.py script under examples on a CUDA-enabled machine, I still get errors that not all of the vectors are CUDA vectors (some are still on the CPU). You can ignore my edits in sample.py, but I did annotate each place where I needed to add vector = vector.cuda() to make sample.py run. This occurs even when torch.device('cuda') is called, which should not be the case in PyTorch 0.4.0+.

    Thank you, and feel free to follow up with any questions.

    opened by DavidLKing 5
  • Doubt on "pytorch-seq2seq/seq2seq/models/EncoderRNN.py"

    if self.variable_lengths:
        embedded = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths, batch_first=True)
    output, hidden = self.rnn(embedded)
    if self.variable_lengths:
        output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)

    Hi, why does this code check if self.variable_lengths twice?

    Also, this code doesn't specify h0 and c0; does that mean they default to zero?

    Looking forward to your response. @kylegao91

    opened by caozhen-alex 5
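    On the h0/c0 question: when no initial state is passed, PyTorch's recurrent layers initialize it to zeros. A small standalone check in plain PyTorch (not the library's code):

    import torch
    import torch.nn as nn

    rnn = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
    x = torch.randn(4, 10, 8)

    out_default, _ = rnn(x)                    # h0 and c0 default to zeros
    zeros = torch.zeros(1, 4, 16)              # (num_layers, batch, hidden_size)
    out_explicit, _ = rnn(x, (zeros, zeros))   # same result with explicit zero states
    assert torch.allclose(out_default, out_explicit)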
  • Cuda.LongTensor instead of LongTensor on GPU

    I found this bug when running the basic script, example/sample.py:

    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
    RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.LongTensor for argument #3 'index'

    It seems a tensor has the wrong type on the GPU.

    fixed in develop 
    opened by ShilinHe 5
  • Compatibility of TopKDecoder with DecoderRNN

    The codebase contains a TopKDecoder which can be used to do beam search while generating sentences. According to the docstring, the __init__ method takes a DecoderRNN object as input, but the code accesses attributes like .lang and .SOS_token_id, which are not present in the DecoderRNN class.

    Also, my understanding is that the TopKDecoder can be used to generate sentences after the DecoderRNN has been trained. Is this understanding correct?

    high priority fixed in develop 
    opened by abhiskk 5
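    For context, the usage suggested by the docstring would look roughly like the sketch below; trained_model is a hypothetical trained Seq2seq instance, the constructor arguments are assumptions, and whether this runs as-is depends on the fix referenced above.

    from seq2seq.models import Seq2seq, TopKDecoder

    # Wrap the trained decoder in a beam-search decoder of width k.
    beam_decoder = TopKDecoder(trained_model.decoder, k=5)
    beam_model = Seq2seq(trained_model.encoder, beam_decoder)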
  • GPU error when run sample code

    When I run the sample code, python examples/sample.py --train_path $TRAIN_PATH --dev_path $DEV_PATH

    GPU errors appear as below. It seems the data is not a GPU tensor, and I failed to solve it. Has anyone met this error, too?


    /home/Vachel/env3/lib/python3.5/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='elementwise_mean' instead.
      warnings.warn(warning.format(ret))
    2018-11-20 23:33:48,774 root INFO Namespace(dev_path='data/toy_reverse/dev/data.txt', expt_dir='./experiment', load_checkpoint=None, log_level='info', resume=False, train_path='data/toy_reverse/train/data.txt')
    /home/Vachel/env3/lib/python3.5/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
      warnings.warn(warning.format(ret))
    /home/Vachel/env3/lib/python3.5/site-packages/torch/nn/modules/rnn.py:38: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
      "num_layers={}".format(dropout, num_layers))
    2018-11-20 23:33:51,817 seq2seq.trainer.supervised_trainer INFO Optimizer: Adam (
    Parameter Group 0
        amsgrad: False
        betas: (0.9, 0.999)
        eps: 1e-08
        lr: 0.001
        weight_decay: 0
    ), Scheduler: None
    Traceback (most recent call last):
      File "examples/sample.py", line 129, in <module>
        resume=opt.resume)
      File "/home/Vachel/SDML/hw3-0/pytorch-seq2seq/seq2seq/trainer/supervised_trainer.py", line 186, in train
        teacher_forcing_ratio=teacher_forcing_ratio)
      File "/home/Vachel/SDML/hw3-0/pytorch-seq2seq/seq2seq/trainer/supervised_trainer.py", line 103, in _train_epoches
        loss = self._train_batch(input_variables, input_lengths.tolist(), target_variables, model, teacher_forcing_ratio)
      File "/home/Vachel/SDML/hw3-0/pytorch-seq2seq/seq2seq/trainer/supervised_trainer.py", line 55, in _train_batch
        teacher_forcing_ratio=teacher_forcing_ratio)
      File "/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/Vachel/SDML/hw3-0/pytorch-seq2seq/seq2seq/models/seq2seq.py", line 48, in forward
        encoder_outputs, encoder_hidden = self.encoder(input_variable, input_lengths)
      File "/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/Vachel/SDML/hw3-0/pytorch-seq2seq/seq2seq/models/EncoderRNN.py", line 68, in forward
        embedded = self.embedding(input_var)
      File "/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/modules/sparse.py", line 110, in forward
        self.norm_type, self.scale_grad_by_freq, self.sparse)
      File "/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/functional.py", line 1110, in embedding
        return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
    RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.LongTensor for argument #3 'index'

    duplicate fixed in develop 
    opened by vachelch 4
  • Adding python logger

    #7: Changes the output logs to use the Python logger rather than print statements.

    Earlier output :

    Namespace(dev_path='../tests/data/eng-fra.txt', expt_dir='./experiment', load_checkpoint=None, resume=False, train_path='../tests/data/eng-fra.txt')
    Reading lines...
    Read 100 lines
    Number of pairs: 100
    Reading lines...
    Read 100 lines
    Number of pairs: 100
    Finished epoch 1, Dev Perplexity: 143.2751
    Finished epoch 2, Dev Perplexity: 133.0537
    Time elapsed: 3s, Progress: 62%, Train Perplexity: 139.8005
    

    Current output :

    INFO:__main__:Namespace(dev_path='../tests/data/eng-fra.txt', expt_dir='./experiment', load_checkpoint=None, resume=False, train_path='../tests/data/eng-fra.txt')
    INFO:seq2seq.dataset.utils:Reading Lines form ../tests/data/eng-fra.txt
    Read 100 lines
    INFO:seq2seq.dataset.utils:
    Number of pairs: 100
    INFO:seq2seq.dataset.utils:Reading Lines form ../tests/data/eng-fra.txt
    Read 100 lines
    INFO:seq2seq.dataset.utils:
    Number of pairs: 100
    INFO:seq2seq.trainer.supervised_trainer:Finished epoch 1, Dev Perplexity: 136.4548
    INFO:seq2seq.trainer.supervised_trainer:Finished epoch 2, Dev Perplexity: 125.4301
    INFO:seq2seq.trainer.supervised_trainer:Time elapsed: 3s, Progress: 62%, Train Perplexity: 134.4973
    
    opened by avinash2692 4
  • GPU Tesla P100 vs Intel i7 CPU. GPU is only 2x faster.

    Only a 2x speed-up on a Tesla P100 vs. an Intel i7 CPU.

    GPU: Time elapsed: 4m 36s, Progress: 8%, Train Perplexity: 1.1057

    CPU: Time elapsed: 4m 1s, Progress: 3%, Train Perplexity: 1.1451

    Running on the SimpleQuestion dataset.

    opened by PetrochukM 4
  • Teacher forcing per timestep?

    Hi,

    I don't understand why teacher forcing is applied to the whole sequence. The definition of teacher forcing says that at each timestep, either the predicted token or the ground-truth token from the previous timestep should be fed in. The implementation here, on the other hand, first decides whether to generate the whole sequence with teacher forcing, and then decodes the entire sequence with teacher forcing set to True or False, which I believe is not correct.

    I really appreciate the feedback on this issue, Thanks!

    opened by aligholami 1
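    For reference, per-timestep teacher forcing as described above looks roughly like the following sketch; decoder here is a hypothetical module mapping (previous tokens, hidden state) to (logits, hidden state), not the library's DecoderRNN:

    import random
    import torch

    def decode_per_step(decoder, hidden, targets, teacher_forcing_ratio=0.5):
        outputs = []
        token = targets[:, 0]                  # start-of-sequence tokens, shape (batch,)
        for t in range(1, targets.size(1)):
            logits, hidden = decoder(token, hidden)
            outputs.append(logits)
            # Decide per timestep: feed the ground truth or the model's own prediction.
            if random.random() < teacher_forcing_ratio:
                token = targets[:, t]
            else:
                token = logits.argmax(dim=-1)
        return torch.stack(outputs, dim=1), hidden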
  • Out of memory for NLLLoss even the batch size is small

    Hi, I'm using this framework on my dataset. Everything works fine on CPU, but when I moved to GPU, I got the following error:

    File "/home/ibm_decoder/DecoderRNN.py", line 107, in forward_step
        predicted_softmax = function(self.out(output.contiguous().view(-1, self.hidden_size)), dim=1).view(batch_size, output_size, -1)
    File "/home/anaconda2/envs/lib/python3.6/site-packages/torch/nn/functional.py", line 1317, in log_softmax
        ret = input.log_softmax(dim)
    RuntimeError: CUDA out of memory. Tried to allocate 2.77 GiB (GPU 0; 10.76 GiB total capacity; 8.66 GiB already allocated; 943.56 MiB free; 9.06 GiB reserved in total by PyTorch)

    The batch size is only 32, so I don't know what was wrong and what caused such a big memory allocation.

    opened by serenayj 0
  • The dimension of predicted_softmax in DecoderRNN.py

    https://github.com/IBM/pytorch-seq2seq/blob/f146087a9a271e9b50f46561e090324764b081fb/seq2seq/models/DecoderRNN.py#L105

    I think .view(batch_size, output_size, -1) should be .view(batch_size, -1, output_size); otherwise this line just makes no sense.

    opened by tk1363704 0
  • Teacher forcing during beam decoding

    https://github.com/IBM/pytorch-seq2seq/blob/f146087a9a271e9b50f46561e090324764b081fb/seq2seq/models/TopKDecoder.py#L83

    I think teacher_forcing should not be present in beam decoding, since ground truth tokens are not known during inference.

    opened by iamsimha 0