A PyTorch implementation of the paper "Learning Shared Semantic Space for Speech-to-Text Translation", ACL (Findings) 2021

Overview

Chimera: Learning Shared Semantic Space for Speech-to-Text Translation


This is a PyTorch implementation of the "Chimera" paper Learning Shared Semantic Space for Speech-to-Text Translation https://arxiv.org/abs/2105.03095 (accepted to ACL Findings 2021), which aims to bridge the modality gap by unifying the tasks of MT (textual Machine Translation) and ST (Speech-to-Text Translation). It achieves new SOTA performance on all 8 language pairs of the MuST-C benchmark by utilizing an external MT corpus.


This repository is currently a nightly version and may contain bugs introduced during code refactoring. It has also not yet been fully tested on configurations other than the authors' working environment. However, we encourage you to first have a look at the results and model code to get a general impression of what this project is about.

The code base was copied from the FairSeq repository https://github.com/pytorch/fairseq.git (without an actual fork operation) in September 2020. It therefore lags behind later FairSeq updates, and neither the code nor the checkpoints are compatible with the current FairSeq version. You will need to modify the model code and checkpoint configurations if you want to follow the newer FairSeq code.

CONTRIBUTION: You are also more than welcome to test our code on your machines and report feedback on results, bugs, and performance!



Results

Our model (Chimera) achieves new state-of-the-art results on all 8 language pairs on MuST-C:

Direction EN-DE EN-FR EN-RU EN-ES EN-IT EN-RO EN-PT EN-NL
BLEU 26.3 35.6 17.4 30.6 25.0 24.0 30.2 29.2

Chimera learns M distinct "memories" to store specific types of semantic information from both audio and text inputs. Shown below is a visualization of the "memories" learned by Chimera-16, a variant with M = 16. Each learned cluster represents an individual type of information, while each marker is a sentence sample; "+" and "." denote text and audio samples, respectively.

The figure below (left) shows more clearly that the memories learn a well-clustered semantic space, forming a "semantic" (rather than spatial) alignment between audio and text inputs while ignoring modality differences.

On the right, we zoom in on one specific cluster, and it can easily be observed that the vectors are well structured as well: inputs with similar semantic features (probably sharing at least one of them) lie close to each other in space.

We can even focus on a single translation instance and see how the memories work. The visualization below shows the alignment between audio attention and text attention, which gathers tightly around the diagonal. Different colors represent different memories, each attending to a different semantic segment of the sentence / audio, as shown in the figure.



Trained Checkpoints

Our trained checkpoints are available at:

Translation Direction Filename External URL
English-to-Deutsch Chimera_EN2DE.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2DE.pt
English-to-French Chimera_EN2FR.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2FR.pt
English-to-Russian Chimera_EN2RU.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2RU.pt
English-to-Espanol Chimera_EN2ES.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2ES.pt
English-to-Italiano Chimera_EN2IT.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2IT.pt
English-to-Romanian Chimera_EN2RO.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2RO.pt
English-to-Portuguese Chimera_EN2PT.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2PT.pt
English-to-Dutch Chimera_EN2NL.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2NL.pt



Interactive Translation

You can download any one of the checkpoints mentioned above and translate local audio files (only .wav is supported) into another language! To do this, you only need to run the model in interactive mode. For example, to translate from English to Deutsch (DE) with an already trained checkpoint at $CHECKPOINT:

bash run.sh --script chimera/scripts/interactive-en2any-ST.sh \
    --target de --checkpoint $CHECKPOINT

The program will prompt for an input file name like this:

2021-04-02 10:00:00 | INFO | fairseq_cli.interactive | Type the input sentence and press return:

After entering the file name, the program will output translations like:

H-0     -1.0      ▁Nach ▁dem ...
D-0     -1.0      Nach dem ...
P-0     -1.0000 -1.0000 ...

NOTE: Do not input a file that is too large. Normally the model can translate 1~5 normal-length sentences at a time. If the input is too long, the program could crash.

To exit the interactive mode, you only need to input an invalid file name.

To translate to other languages, remember to replace de with the corresponding language code (in lower case); an example follows the table below:

Language Code
Deutsch (German) DE / de
French FR / fr
Espanol (Spanish) ES / es
Russian RU / ru
Italiano (Italian) IT / it
Romanian RO / ro
Portuguese PT / pt
Dutch (Netherlands) NL / nl
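
For example (a sketch assuming the same interactive script as above, with $CHECKPOINT now pointing at an EN-FR checkpoint such as Chimera_EN2FR.pt), translating into French would look like:

bash run.sh --script chimera/scripts/interactive-en2any-ST.sh \
    --target fr --checkpoint $CHECKPOINT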



Training a Model on MuST-C

Let's first take a look at training an English-to-Deutsch model as an example.

Data Preparation

  0. Prerequisites and Configuration. First check that the pip requirements in requirements.txt and the apt requirements in apt-requirements.txt are met. Some items in the two files may be redundant, but we have not had time to check and eliminate them.

For configuration, please set the global variables $WMT_ROOT, $MUSTC_ROOT and $SAVE_ROOT. These determine where the datasets and checkpoints will be stored. For example:

export MUSTC_ROOT="speech_data/mustc"
export WMT_ROOT="wmt_data"
export SAVE_ROOT="checkpoints"
export target=de
mkdir -p $MUSTC_ROOT $WMT_ROOT $SAVE_ROOT

NOTE: This simple configuration is a prerequisite for most of the following steps. Here export target=de means the translation direction is English to Deutsch.

  1. Download and uncompress the EN-to-DE MuST-C dataset to $MUSTC_ROOT/en-$target. TIP: to speed up uncompressing a large file, you can replace tar xzvf with: pigz -dc $TARFILE | tar xvf - (see the example below).
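
For instance, assuming the downloaded EN-DE archive is named MUSTC_v1.0_en-de.tar.gz (the exact filename is an assumption; adjust it to match your download) and unpacks into an en-de/ subdirectory, decompression with pigz would look like:

cd $MUSTC_ROOT
pigz -dc MUSTC_v1.0_en-de.tar.gz | tar xvf -   # extracts into en-de/ (i.e. en-$target)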

  2. Download the WMT data to $WMT_ROOT/orig via:

bash chimera/prepare_data/download-wmt.sh --wmt14 --data-dir $WMT_ROOT --target $target

This may sometimes be slow, as the connection to statmt.org is not stable in some regions. In that case you can switch to other, faster download sources if possible.

  3. Append the MuST-C text data to $WMT_ROOT, prepare the datasets, and produce a joint spm dictionary:
bash chimera/prepare_data/prepare-wmt-en2any.sh \
    --data-dir $WMT_ROOT --wmt14 --original-dev \
    --external mustc --target $target --subword spm
python3 chimera/prepare_data/prep_mustc_data.py \
    --data-root $MUSTC_ROOT --task wave \
    --ignore_fbank80 --joint_spm wmt14-en-$target-spm \
    --languages $target --vocab-type unigram --vocab-size 10000

NOTE: if the first command executes correctly, you will see this line in the output:

Existing spm dictionary chimera/resources/wmt14-en-de-spm detected. Copying...

If not, the program will still produce a dictionary on the fly and report No existing spm detected. Learning unigram spm on wmt14_en_de/tmp/train.de-en ... This is okay in most cases; the only risk is a potential mismatch with the already trained checkpoints we provide.

Training

To reproduce the results in the last row of Figure 1 in the paper, you can directly use the available training scripts as follows.

  4. Pre-training on MT data:
bash run.sh --script chimera/scripts/train-en2any-MT.sh \
    --target $target --dataset wmt14 --max_updates 500000

If you like, you can override the default values of some arguments. The default setting is --seed 1 --num-gpus 8, which is equivalent to appending these flags to the command above; see the example below. The value of --num-gpus is recommended to be a power of 2 and no larger than 8, i.e. one of {1, 2, 4, 8}.
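
A minimal sketch (assuming the extra flags can simply be appended to the pre-training command above), e.g. running on 4 GPUs with a different seed:

bash run.sh --script chimera/scripts/train-en2any-MT.sh \
    --target $target --dataset wmt14 --max_updates 500000 \
    --seed 2 --num-gpus 4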

  5. Fine-tuning on MuST-C data:
bash run.sh --script chimera/scripts/train-en2any-ST.sh \
    --target $target --dataset wmt14 --max_updates 150000

This script moves the MT-pre-trained model from ${MT_SAVE_DIR}/checkpoint_best.pt to ${ST_SAVE_DIR} as an initialization for ST fine-tuning.

Optionally, if you need to resume a single ST training run, you can add the argument --resume to the command to avoid overwriting the existing ${ST_SAVE_DIR}/checkpoint_last.pt; see the sketch below.
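
A minimal sketch (assuming --resume is simply appended to the fine-tuning command above):

bash run.sh --script chimera/scripts/train-en2any-ST.sh \
    --target $target --dataset wmt14 --max_updates 150000 --resume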

The scripts in Steps 4 and 5 fork a separate background evaluation process while running. This process monitors $MT_SAVE_ROOT or $ST_SAVE_ROOT and evaluates any new checkpoints. Don't worry, it will be killed automatically after training finishes; if the script is Ctrl-C'ed, however, you can manually raise the suicide flag with touch chimera/tools/auto-generate-suicide.code to kill the background generation process.

Note that this automatic process only evaluates a single checkpoint (with no averaging), and with a low beam width.

  6. Averaging Checkpoints and Evaluating Them

Suppose the best ST checkpoint is at epoch $BEST_EPOCH, and we want to average 7 checkpoints around it.

python3 chimera/tools/eval-average-checkpoint.py \
    --ckpt-dir $ST_SAVE_ROOT --number-of-ckpts 7 \
    --center-of-ckpts $BEST_EPOCH

Other Language Pairs

For the language pairs English-to-{French, Russian, Espanol}, you only need to replace export target=de with {fr, ru, es} in Step 0 and then run Steps 1~6 as before; see the example below.
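
For example, for English-to-French (assuming everything else is kept identical to the EN-DE walkthrough), Step 0 simply becomes:

export target=fr   # or ru / es

after which the remaining steps can be run unchanged.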

For the language pairs English-to-{Italiano, Portuguese, Dutch, Romanian}, the MT data is different, so we need to modify Steps 2 and 3. All other steps remain unchanged.

English to Romanian

For Romanian, we use the WMT16 corpus in our paper.

Step 2 changes to

bash chimera/prepare_data/download-wmt.sh --wmt16 --data-dir $WMT_ROOT --target ro

Step 3 remains unchanged.

English to {Italiano, Portuguese, Dutch}

These language pairs use OPUS100 as the external MT corpus.

Step 2 changes to

bash chimera/prepare_data/download-opus100.sh --data-dir $WMT_ROOT

Step 3 changes to

bash chimera/prepare_data/prepare-opus100-en2any.sh \
    --data-dir $WMT_ROOT --original-dev \
    --external mustc --target $target --subword spm
python3 chimera/prepare_data/prep_mustc_data.py \
    --data-root $MUSTC_ROOT --task wave \
    --ignore_fbank80 --joint_spm wmt14-en-$target-spm \
    --languages $target --vocab-type unigram --vocab-size 10000

Actually, only the first command of Step 3 changes.

Evaluating a Checkpoint

You can also manually evaluate the performance of any checkpoint on the MuST-C test set. Suppose the path to your checkpoint is $CHECKPOINT:

target=de bash chimera/generate/generate-mustc-final.sh $CHECKPOINT
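
For instance, to evaluate the downloaded EN-DE checkpoint (assuming it has been saved to the current directory under its original filename):

target=de bash chimera/generate/generate-mustc-final.sh Chimera_EN2DE.pt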



License

Part of the code (especially code outside chimera/) is adapted from the FairSeq code base and therefore carries the MIT License of the original code. See NOTICE.md for more details.

Citation

Please cite as:

@article{han2021learning,
  title={Learning Shared Semantic Space for Speech-to-Text Translation},
  author={Han, Chi and Wang, Mingxuan and Ji, Heng and Li, Lei},
  journal={arXiv preprint arXiv:2105.03095},
  year={2021}
}