A PyTorch implementation of the paper "Learning Shared Semantic Space for Speech-to-Text Translation", ACL (Findings) 2021

Overview

Chimera: Learning Shared Semantic Space for Speech-to-Text Translation


This is a PyTorch implementation of the "Chimera" paper Learning Shared Semantic Space for Speech-to-Text Translation https://arxiv.org/abs/2105.03095 (accepted by ACL Findings 2021), which aims to bridge the modality gap by unifying the tasks of MT (textual Machine Translation) and ST (Speech-to-Text Translation). It achieves new SOTA performance on all 8 language pairs of the MuST-C benchmark by utilizing an external MT corpus.


This repository is currently a nightly version and may contain bugs due to ongoing code refactoring. It has also not yet been fully tested on configurations other than the authors' working environment. Nevertheless, we encourage you to first have a look at the results and model code to get a general impression of what this project is about.

The code base was copied from the FairSeq repository https://github.com/pytorch/fairseq.git (without an actual fork operation) in September 2020. It therefore lags behind later FairSeq updates, and neither the code nor the checkpoints are compatible with the current FairSeq version. You will need to modify the model code and checkpoint configurations if you want to follow the newer FairSeq code.

CONTRIBUTION: You are also more than welcome to test our code on your machines and report feedback on results, bugs, and performance!



Results

Our model (Chimera) achieves new state-of-the-art results on all 8 language pairs on MuST-C:

Direction EN-DE EN-FR EN-RU EN-ES EN-IT EN-RO EN-PT EN-NL
BLEU 26.3 35.6 17.4 30.6 25.0 24.0 30.2 29.2

A novelty of Chimera is that it learns M distinct "memories" to store specific types of semantic information from both audio and text inputs. Shown below is a visualization of the "memories" learned by Chimera-16, a variant with M = 16. Each learned cluster represents an individual type of information, while each marker is a sentence sample. "+" and "." denote text and audio samples, respectively.

We can see more clearly below (left) that the memories learn a well-clustered semantic space, forming a "semantic" (rather than spatial) alignment between audio and text inputs while ignoring modality differences.

On the right, we zoom in on one specific cluster. The vectors are well structured there as well: inputs that share (at least one) similar semantic feature lie close to each other in space.

We can even focus on a single translation instance and see how the memories work. The visualization below shows the alignment between audio attention and text attention, which gathers tightly around the diagonal. Different colors represent different memories, each attending to a different semantic segment of the sentence / audio, as shown in the figure.



Trained Checkpoints

Our trained checkpoints are available at:

Translation Direction Filename External URL
English-to-Deutsch Chimera_EN2DE.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2DE.pt
English-to-French Chimera_EN2FR.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2FR.pt
English-to-Russian Chimera_EN2RU.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2RU.pt
English-to-Espanol Chimera_EN2ES.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2ES.pt
English-to-Italiano Chimera_EN2IT.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2IT.pt
English-to-Romanian Chimera_EN2RO.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2RO.pt
English-to-Portuguese Chimera_EN2PT.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2PT.pt
English-to-Dutch Chimera_EN2NL.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2NL.pt



Interactive Translation

You can download any checkpoint listed above to your local machine and translate local audios (only .wav files supported) into another language! To do this, you only need to run the model in interactive mode. For example, to translate from English to Deutsch (DE) with an already trained checkpoint at $CHECKPOINT:

bash run.sh --script chimera/scripts/interactive-en2any-ST.sh \
    --target de --checkpoint $CHECKPOINT
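For instance, a minimal end-to-end sketch (assuming you run from the repository root and have wget installed) that fetches the released EN-DE checkpoint from the table above and starts interactive translation:

wget http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2DE.pt
CHECKPOINT=$PWD/Chimera_EN2DE.pt
bash run.sh --script chimera/scripts/interactive-en2any-ST.sh \
    --target de --checkpoint $CHECKPOINT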

The program will prompt for an input file name like this:

2021-04-02 10:00:00 | INFO | fairseq_cli.interactive | Type the input sentence and press return:

After entering the file name, the program will print translations like:

H-0     -1.0      ▁Nach ▁dem ...
D-0     -1.0      Nach dem ...
P-0     -1.0000 -1.0000 ...

NOTE: Do not feed in a file that is too large. Normally the model can translate 1~5 normal-length sentences at a time; if the input is too long, the program could crash.
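If you have many inputs to translate, one simple workaround (a hypothetical helper, not part of the released scripts) is to split the input list into small chunks with the standard split utility and feed the chunk files to the prompt one at a time:

# split a long input list (hypothetical file name) into chunks of 5 lines each
split -l 5 all_inputs.txt chunk_
# then enter chunk_aa, chunk_ab, ... at the interactive prompt, one per round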

To exit the interactive mode, you only need to input an invalid file name.

To translate into other languages, replace de with the corresponding language code (in lower case) from the table below; an example follows the table:

Language Code
Deutsch (German) DE / de
French FR / fr
Espanol (Spanish) ES / es
Russian RU / ru
Italiano (Italian) IT / it
Romanian RO / ro
Portuguese PT / pt
Dutch (Netherlands) NL / nl
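For example, translating into French uses the same command with --target fr (here $CHECKPOINT should point to the downloaded Chimera_EN2FR.pt):

bash run.sh --script chimera/scripts/interactive-en2any-ST.sh \
    --target fr --checkpoint $CHECKPOINT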



Training a Model on MuST-C

Let's first take a look at training an English-to-Deutsch model as an example.

Data Preparation

  0. Prerequisites and Configuration. First check that the pip requirements in requirements.txt and the apt requirements in apt-requirements.txt are met. Some items in the two files may be redundant, but we haven't had time to check and eliminate them.

For configuration, please set the global variables $WMT_ROOT, $MUSTC_ROOT, and $SAVE_ROOT. These specify where the datasets and checkpoints will be placed. For example:

export MUSTC_ROOT="speech_data/mustc"
export WMT_ROOT="wmt_data"
export SAVE_ROOT="checkpoints"
export target=de
mkdir -p $MUSTC_ROOT $WMT_ROOT $SAVE_ROOT

NOTE: This simple configuration is a prerequisite for most of the following steps. Here export target=de means the translation direction is English to Deutsch.

  1. Download and uncompress the EN-to-DE MuST-C dataset to $MUSTC_ROOT/en-$target. TIP: to speed up uncompressing a very large file, you can replace tar xzvf with: pigz -dc $TARFILE | tar xvf -
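For example, a sketch of the extraction step (the archive file name below is an assumption and may differ depending on the MuST-C release you obtain):

mkdir -p $MUSTC_ROOT
# assuming the downloaded archive is named MUSTC_v1.0_en-de.tar.gz (hypothetical name)
pigz -dc MUSTC_v1.0_en-de.tar.gz | tar xvf - -C $MUSTC_ROOT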

  2. Download the WMT data to $WMT_ROOT/orig via:

bash chimera/prepare_data/download-wmt.sh --wmt14 --data-dir $WMT_ROOT --target $target

This can sometimes be slow, as the connection to statmt.org is not stable in some regions. In that case, you can switch to other, faster download sources if available.

  3. Append the MuST-C text data to $WMT_ROOT, prepare the datasets, and produce a joint spm dictionary:
bash chimera/prepare_data/prepare-wmt-en2any.sh \
    --data-dir $WMT_ROOT --wmt14 --original-dev \
    --external mustc --target $target --subword spm
python3 chimera/prepare_data/prep_mustc_data.py \
    --data-root $MUSTC_ROOT --task wave \
    --ignore_fbank80 --joint_spm wmt14-en-$target-spm \
    --languages $target --vocab-type unigram --vocab-size 10000

NOTE: if the first command executes correctly, you will see this line in the output:

Existing spm dictionary chimera/resources/wmt14-en-de-spm detected. Copying...

If not, the program will still produce a dictionary on the fly and report No existing spm detected. Learning unigram spm on wmt14_en_de/tmp/train.de-en ... This is okay in most cases; the only risk is a potential vocabulary mismatch with the already trained checkpoints we provide.
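You can check in advance whether the bundled dictionary exists for your target language (path taken from the log line above):

ls chimera/resources/wmt14-en-$target-spm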

Training

To reproduce the results in the last row of Figure 1 in the paper, you can directly use the training scripts as follows.

  4. Pre-training on MT data:
bash run.sh --script chimera/scripts/train-en2any-MT.sh \
    --target $target --dataset wmt14 --max_updates 500000

If you like, you can override the default values of some arguments. The default setting is --seed 1 --num-gpus 8, which makes the full command look like bash run.sh --script chimera/scripts/train-en2any-MT.sh --target $target --dataset wmt14 --max_updates 500000 --seed 1 --num-gpus 8. The value of --num-gpus is recommended to be a power of 2 no larger than 8, i.e. one of {1, 2, 4, 8}.

  5. Fine-tuning on MuST-C data:
bash run.sh --script chimera/scripts/train-en2any-ST.sh \
    --target $target --dataset wmt14 --max_updates 150000

This script moves the MT-pre-trained model from ${MT_SAVE_DIR}/checkpoint_best.pt to ${ST_SAVE_DIR} as an initialization for ST fine-tuning.

Optionally, if you need to resume a single ST training run, you can add the --resume argument to the command to avoid overwriting the existing ${ST_SAVE_DIR}/checkpoint_last.pt.
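For example (the same command as above, with --resume appended):

bash run.sh --script chimera/scripts/train-en2any-ST.sh \
    --target $target --dataset wmt14 --max_updates 150000 --resume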

The scripts in Steps 4 and 5 fork a separate background evaluation process while running. This process monitors $MT_SAVE_ROOT or $ST_SAVE_ROOT and evaluates any new checkpoints. Don't worry: it is automatically killed after training finishes. If the training script is Ctrl-C'ed, however, you can manually raise the suicide flag with touch chimera/tools/auto-generate-suicide.code to kill the background generation process.

Note that this automatic process only evaluates single checkpoints (no averaging) and uses a low beam width.

  6. Averaging Checkpoints and Evaluating

Suppose the best ST checkpoint is at epoch $BEST_EPOCH, and we want to average the 7 checkpoints around it:

python3 chimera/tools/eval-average-checkpoint.py \
    --ckpt-dir $ST_SAVE_ROOT --number-of-ckpts 7 \
    --center-of-ckpts $BEST_EPOCH

Other Language Pairs

For the language pairs English-to-{French, Russian, Espanol}, you only need to replace export target=de with {fr, ru, es} in Step 0, and then run Steps 1~5.

For the language pairs English-to-{Italiano, Portuguese, Dutch, Romanian}, the MT data is different, so we need to modify Steps 2 and 3. All other steps remain unchanged.

English to Romanian

For Romanian, we use the WMT16 corpus in our paper.

Step 2 changes to:

bash chimera/prepare_data/download-wmt.sh --wmt16 --data-dir $WMT_ROOT --target ro

Step 3 remains unchanged.

English to {Italiano, Portuguese, Dutch}

These language pairs use OPUS100 as the external MT corpus.

Step 2 changes to:

bash chimera/prepare_data/download-opus100.sh --data-dir $WMT_ROOT

Step 3 changes to:

bash chimera/prepare_data/prepare-opus100-en2any.sh \
    --data-dir $WMT_ROOT --original-dev \
    --external mustc --target $target --subword spm
python3 chimera/prepare_data/prep_mustc_data.py \
    --data-root $MUSTC_ROOT --task wave \
    --ignore_fbank80 --joint_spm wmt14-en-$target-spm \
    --languages $target --vocab-type unigram --vocab-size 10000

Actually, only the first command of Step 3 changes.

Evaluating a Checkpoint

You can also manually evaluate the performance of any checkpoint on the MuST-C test set. Suppose the path to your checkpoint is $CHECKPOINT:

target=de bash chimera/generate/generate-mustc-final.sh $CHECKPOINT
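For example, to evaluate the released EN-DE checkpoint (assuming it has been downloaded to the current directory):

CHECKPOINT=$PWD/Chimera_EN2DE.pt
target=de bash chimera/generate/generate-mustc-final.sh $CHECKPOINT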



License

Part of the code (especially code outside chimera/) is adapted from the FairSeq code base and therefore carries the MIT License of its original code. See NOTICE.md for more details.

Citation

Please cite as:

@article{han2021learning,
  title={Learning Shared Semantic Space for Speech-to-Text Translation},
  author={Han, Chi and Wang, Mingxuan and Ji, Heng and Li, Lei},
  journal={arXiv preprint arXiv:2105.03095},
  year={2021}
}