Translate - a PyTorch Language Library

Overview

NOTE

PyTorch Translate is now deprecated; please use fairseq instead.


Translate is a library for machine translation written in PyTorch. It provides training for sequence-to-sequence models. Translate relies on fairseq, a general sequence-to-sequence library, which means that models implemented in both Translate and fairseq can be trained. Translate also provides the ability to export some models to Caffe2 graphs via ONNX and to load and run these models from C++ for production purposes. Currently, we export components (encoder, decoder) to Caffe2 separately, and beam search is implemented in C++. In the near future, we will be able to export the beam search as well. We also plan to add export support for more models.

Quickstart

If you are just interested in training/evaluating MT models, and not in exporting the models to Caffe2 via ONNX, you can install Translate for Python 3 by following these few steps:

  1. Install pytorch
  2. Install fairseq
  3. Clone this repository git clone https://github.com/pytorch/translate.git pytorch-translate && cd pytorch-translate
  4. Run python setup.py install

Provided you have CUDA installed you should be good to go.

Requirements and Full Installation

Translate requires:

  • A Linux operating system with a CUDA compatible card
  • GNU C++ compiler version 4.9.2 and above
  • A CUDA installation. We recommend CUDA 8.0 or CUDA 9.0

Use Our Docker Image:

Install Docker and nvidia-docker, then run

sudo docker pull pytorch/translate
sudo nvidia-docker run -i -t --rm pytorch/translate /bin/bash
. ~/miniconda/bin/activate
cd ~/translate

You should now be able to run the sample commands in the Usage Examples section below. You can also see the available image versions at https://hub.docker.com/r/pytorch/translate/tags/.

Install Translate from Source:

These instructions were mainly tested on Ubuntu 16.04.5 LTS (Xenial Xerus) with a Tesla M60 card and a CUDA 9 installation. We highly encourage you to report an issue if you are unable to install this project for your specific configuration.

  • If you don't already have an existing Anaconda environment with Python 3.6, you can install one via Miniconda3:

    wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
    chmod +x miniconda.sh
    ./miniconda.sh -b -p ~/miniconda
    rm miniconda.sh
    . ~/miniconda/bin/activate
    
  • Clone the Translate repo:

    git clone https://github.com/pytorch/translate.git
    pushd translate
    
  • Install the PyTorch conda package:

    # Set to 8 or 9 depending on your CUDA version.
    TMP_CUDA_VERSION="9"
    
    # Uninstall previous versions of PyTorch. Doing this twice is intentional.
    # Error messages about torch not being installed are benign.
    pip uninstall -y torch
    pip uninstall -y torch
    
    # This may not be necessary if you already have the latest cuDNN library.
    conda install -y cudnn
    
    # Add LAPACK support for the GPU.
    conda install -y -c pytorch "magma-cuda${TMP_CUDA_VERSION}0"
    
    # Install the combined PyTorch nightly conda package.
    conda install pytorch-nightly cudatoolkit=${TMP_CUDA_VERSION}.0 -c pytorch
    
    # Install NCCL2.
    wget "https://s3.amazonaws.com/pytorch/nccl_2.1.15-1%2Bcuda${TMP_CUDA_VERSION}.0_x86_64.txz"
    TMP_NCCL_VERSION="nccl_2.1.15-1+cuda${TMP_CUDA_VERSION}.0_x86_64"
    tar -xvf "${TMP_NCCL_VERSION}.txz"
    rm "${TMP_NCCL_VERSION}.txz"
    
    # Set some environmental variables needed to link libraries correctly.
    export CONDA_PATH="$(dirname $(which conda))/.."
    export NCCL_ROOT_DIR="$(pwd)/${TMP_NCCL_VERSION}"
    export LD_LIBRARY_PATH="${CONDA_PATH}/lib:${NCCL_ROOT_DIR}/lib:${LD_LIBRARY_PATH}"
    
  • Install ONNX:

    git clone --recursive https://github.com/onnx/onnx.git
    yes | pip install ./onnx 2>&1 | tee ONNX_OUT
    

If you get a Protobuf compiler not found error, you need to install it:

conda install -c anaconda protobuf

Then, try to install ONNX again:

yes | pip install ./onnx 2>&1 | tee ONNX_OUT

  • Build Translate:

    pip uninstall -y pytorch-translate
    python3 setup.py build develop
    

Now you should be able to run the example scripts below!
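
As a quick sanity check (a minimal sketch, not part of the official instructions), you can confirm from Python that the CUDA-enabled PyTorch build and cuDNN are visible:

import torch

print(torch.__version__)               # PyTorch version
print(torch.version.cuda)              # CUDA version PyTorch was built against
print(torch.cuda.is_available())       # True if a CUDA device is usable
print(torch.backends.cudnn.version())  # cuDNN version, if found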

Usage Examples

Note: the example commands given assume that you are in the root of the cloned GitHub repository or that you're in the translate directory of the Docker or Amazon image. You may also need to make sure you have the Anaconda environment activated.

Training

We provide an example script to train a model for the IWSLT 2014 German-English task. We used this command to obtain a pretrained model:

bash pytorch_translate/examples/train_iwslt14.sh

The pretrained model actually contains two checkpoints that correspond to training twice with random initialization of the parameters. This is useful to obtain ensembles. This dataset is relatively small (~160K sentence pairs), so training will complete in a few hours on a single GPU.

Training with tensorboard visualization

We provide support for visualizing training stats with tensorboard. As a dependency, you will need tensorboard_logger installed.

pip install tensorboard_logger

Please also make sure that tensorboard is installed; it comes with the TensorFlow installation.

You can use the above example script to train with tensorboard, but you need to change line 10 from:

CUDA_VISIBLE_DEVICES=0 python3 pytorch_translate/train.py

to

CUDA_VISIBLE_DEVICES=0 python3 pytorch_translate/train_with_tensorboard.py

The event log directory for tensorboard can be specified with the --tensorboard_dir option (default: run-1234). This directory is appended to your --save_dir argument.

For example in the above script, you can visualize with:

tensorboard --logdir checkpoints/runs/run-1234

Multiple runs can be compared by specifying different --tensorboard_dir values, e.g. run-1234 and run-2345. Then

tensorboard --logdir checkpoints/runs

can visualize stats from both runs.
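
For reference, this is roughly what the tensorboard_logger dependency does under the hood: it writes scalar values into the event log directory, and tensorboard reads them from there. A minimal sketch (the metric name and values are made up for illustration):

from tensorboard_logger import configure, log_value

# Point the logger at the event log directory (matches the run directory shown above).
configure("checkpoints/runs/run-1234")

# Log a scalar value per step; tensorboard picks these up from the directory.
for step, loss in enumerate([4.2, 3.1, 2.7]):
    log_value("train_loss", loss, step)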

Pretrained Model

A pretrained model for IWSLT 2014 can be evaluated by running the example script:

bash pytorch_translate/examples/generate_iwslt14.sh

Note the improvement in performance when using an ensemble of size 2 instead of a single model.

Exporting a Model with ONNX

We provide an example script to export a PyTorch model to a Caffe2 graph via ONNX:

bash pytorch_translate/examples/export_iwslt14.sh

This will output two files, encoder.pb and decoder.pb, that correspond to the computation of the encoder and one step of the decoder. The example exports a single checkpoint (--checkpoint model/averaged_checkpoint_best_0.pt), but it is also possible to export an ensemble (--checkpoint model/averaged_checkpoint_best_0.pt --checkpoint model/averaged_checkpoint_best_1.pt). Note that during export, you can also control a few hyperparameters such as beam search size, and word and UNK rewards.
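
Under the hood, the export script relies on PyTorch's generic ONNX export. The following is only a sketch of that mechanism with a placeholder model, not the Translate-specific encoder/decoder exporter:

import torch
import torch.nn as nn

# Placeholder model standing in for an encoder or decoder component (illustrative only).
model = nn.Linear(16, 8)
dummy_input = torch.randn(1, 16)

# Trace the model and serialize its graph in ONNX format.
torch.onnx.export(model, dummy_input, "model.onnx")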

Using the Model

To use the sample exported Caffe2 model to translate sentences, run:

echo "hallo welt" | bash pytorch_translate/examples/translate_iwslt14.sh

Note that the model takes in BPE inputs, so some input words need to be split into multiple tokens. For instance, "hineinstopfen" is represented as "hinein@@ stop@@ fen".
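
If you want to apply the same kind of BPE segmentation from Python, one option is the subword-nmt package. This is only a sketch; the path to the BPE codes file produced during preprocessing is hypothetical:

from subword_nmt.apply_bpe import BPE

# Load the BPE merge operations learned during preprocessing (path is hypothetical).
with open("model/bpe.codes", encoding="utf-8") as codes:
    bpe = BPE(codes)

print(bpe.process_line("hineinstopfen"))  # e.g. "hinein@@ stop@@ fen"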

PyTorch Translate Research

We welcome you to explore the models we have in the pytorch_translate/research folder. If you use them and encounter any errors, please paste logs and a command that we can use to reproduce the error. Feel free to contribute any bugfixes or report your experience, but keep in mind that these models are a work in progress and thus are currently unsupported.

Join the Translate Community

We welcome contributions! See the CONTRIBUTING.md file for how to help out.

License

Translate is BSD-licensed, as found in the LICENSE file.
