A Chinese-to-English Neural Machine Translation Project

Overview

ZH-EN NMT: Chinese-to-English Neural Machine Translation

This project is inspired by Stanford's CS224N NMT Project

Dataset used in this project: News Commentary v14

Intro

This project is mainly a learning exercise to familiarize myself with PyTorch, machine translation, and NLP model training.

To investigate how various setups of the recurrent layer affect final performance, I compared the training efficiency and effectiveness of different types of RNN layers for the encoder, changing one feature at a time while keeping all other parameters fixed:

  • RNN types

    • GRU
    • LSTM
  • Activation Functions on Output Layer

    • Tanh
    • ReLU
    • LeakyReLU
  • Number of layers

    • single layer
    • double layer

Code Files

```
./
├─ utils.py             # utilities
├─ vocab.py             # generate vocab
├─ model_embeddings.py  # embedding layer
├─ nmt_model.py         # NMT model definition
└─ run.py               # training and testing
```

Good Translation Examples

  • source: 相反,这意味着合作的基础应当是共同的长期战略利益,而不是共同的价值观。

    • target: Instead, it means that cooperation must be anchored not in shared values, but in shared long-term strategic interests.
    • translation: On the contrary, that means cooperation should be a common long-term strategic interests, rather than shared values.
  • source: 但这个问题其实很简单: 谁来承受这些用以降低预算赤字的紧缩措施的冲击。

    • target: But the issue is actually simple: Who will bear the brunt of measures to reduce the budget deficit?
    • translation: But the question is simple: Who is to bear the impact of austerity measures to reduce budget deficits?
  • source: 上述合作对打击恐怖主义、贩卖人口和移民可能发挥至关重要的作用。

    • target: Such cooperation is essential to combat terrorism, human trafficking, and migration.
    • translation: Such cooperation is essential to fighting terrorism, trafficking, and migration.
  • source: 与此同时, 政治危机妨碍着政府追求艰难的改革。

    • target: At the same time, political crisis is impeding the government’s pursuit of difficult reforms.
    • translation: Meanwhile, political crises hamper the government’s pursuit of difficult reforms.

Preprocessing

Preprocessing Colab notebook

  • using jieba to segment Chinese sentences into space-separated words (a minimal sketch below)
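
A minimal sketch of this segmentation step (the file names here are hypothetical, not the repo's actual paths):

```python
import jieba

# Hypothetical file names; the actual corpus paths may differ.
with open("news-commentary-v14.zh", encoding="utf-8") as fin, \
     open("train.zh.tok", "w", encoding="utf-8") as fout:
    for line in fin:
        # jieba.cut returns a generator of word tokens; join them with spaces
        fout.write(" ".join(jieba.cut(line.strip())) + "\n")
```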

Generate Vocab From Training Data

  • Input: training data of Chinese and English

  • Output: a vocab file containing the (sub)word-to-id mappings for Chinese and English; a limited-size vocabulary is selected using SentencePiece (essentially byte-pair encoding over character n-grams) to cover around 99.95% of the training data
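
A hedged sketch of how such a vocabulary can be produced with SentencePiece; the vocab size and file names are assumptions, not the project's actual settings, and the English side is handled analogously:

```python
import sentencepiece as spm

# Train a BPE model on the tokenized Chinese side; character_coverage=0.9995
# matches the ~99.95% coverage target. vocab_size is an assumed value.
spm.SentencePieceTrainer.train(
    input="train.zh.tok",
    model_prefix="zh_bpe",
    vocab_size=21000,
    character_coverage=0.9995,
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="zh_bpe.model")
print(sp.encode("合作 的 基础", out_type=str))  # subword pieces
print(sp.encode("合作 的 基础"))                # corresponding ids
```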

Model Definition

  • a Seq2Seq model with attention

    (Figure: a Seq2Seq model with attention; image from the book Dive into Deep Learning.)

    • Encoder
      • A Recurrent Layer
    • Decoder
      • LSTMCell (hidden_size=512)
    • Attention
      • Multiplicative Attention (a minimal sketch below)
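
A minimal PyTorch sketch of the multiplicative (Luong-style) attention between the decoder state and the encoder outputs; the names and tensor shapes here are assumptions, and the actual implementation lives in nmt_model.py:

```python
import torch
import torch.nn as nn

class MultiplicativeAttention(nn.Module):
    """Luong-style multiplicative attention: score(h_dec, h_enc) = h_dec^T W h_enc."""
    def __init__(self, dec_size, enc_size):
        super().__init__()
        self.W = nn.Linear(enc_size, dec_size, bias=False)

    def forward(self, dec_hidden, enc_hiddens, enc_masks=None):
        # dec_hidden: (batch, dec_size); enc_hiddens: (batch, src_len, enc_size)
        scores = torch.bmm(self.W(enc_hiddens), dec_hidden.unsqueeze(2)).squeeze(2)
        if enc_masks is not None:
            scores = scores.masked_fill(enc_masks, float("-inf"))  # ignore pad positions
        alpha = torch.softmax(scores, dim=1)                       # attention weights
        context = torch.bmm(alpha.unsqueeze(1), enc_hiddens).squeeze(1)  # (batch, enc_size)
        return context, alpha
```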

Training And Testing Results

Training Colab notebook

  • Hyperparameters:
    • Embedding Size & Hidden Size: 512
    • Dropout Rate: 0.25
    • Starting Learning Rate: 5e-4
    • Batch Size: 32
    • Beam Size for Beam Search: 10
  • NOTE: The BLEU scores reported here are computed on this project's test set, so they should only be used to compare the relative effectiveness of models trained on this data
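
For reference, a corpus-level BLEU score of this kind can be computed as sketched below; this uses NLTK's corpus_bleu as an assumed stand-in, and the repo may compute BLEU differently:

```python
from nltk.translate.bleu_score import corpus_bleu

# references: one list of reference token lists per sentence
# hypotheses: one hypothesis token list per sentence (e.g. from beam search)
references = [[["such", "cooperation", "is", "essential"]]]
hypotheses = [["such", "cooperation", "is", "essential"]]
score = corpus_bleu(references, hypotheses)
print(f"BLEU: {score * 100:.2f}")
```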

Experiment Setup

  • Dataset: the dataset is randomly split into a training set (~260,000), a validation set (~20,000), and a test set (~20,000); the splits are the same for every experiment group
  • Max Number of Iterations: 50000
  • NOTE: I've tried a vanilla RNN (nn.RNN) in various configurations, but its BLEU score turned out to be extremely low (the absence of residual connections might be the issue)
    • I decided not to include it in the comparison until the issue is resolved (a sketch of one possible fix follows)
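
If residual connections are indeed the issue, one common remedy is to add the layer input back to the layer output; this is a hedged illustration of the idea, not code from this repo:

```python
import torch.nn as nn

class ResidualRNN(nn.Module):
    """nn.RNN wrapped with a residual (skip) connection around the layer."""
    def __init__(self, size):
        super().__init__()
        # input_size must equal hidden_size so shapes match for the addition
        self.rnn = nn.RNN(input_size=size, hidden_size=size, batch_first=True)

    def forward(self, x, h0=None):
        out, hn = self.rnn(x, h0)
        return out + x, hn  # the residual connection eases gradient flow
```
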
| Model | Training Time (sec) | BLEU Score on Test Set | Training Perplexities | Validation Perplexities |
| --- | --- | --- | --- | --- |
| A. Bidirectional 1-Layer GRU with Tanh | 5158.99 | 14.26 | (plot) | (plot) |
| B. Bidirectional 1-Layer LSTM with Tanh | 5150.31 | 16.20 | (plot) | (plot) |
| C. Bidirectional 2-Layer LSTM with Tanh | 6197.58 | 16.38 | (plot) | (plot) |
| D. Bidirectional 1-Layer LSTM with ReLU | 5275.12 | 14.01 | (plot) | (plot) |
| E. Bidirectional 1-Layer LSTM with LeakyReLU (slope=0.1) | 5292.58 | 14.87 | (plot) | (plot) |

Current Best Version

Bidirectional 2-Layer LSTM with Tanh, embed_size & hidden_size of 1024, trained for 11517.19 sec (44,000 iterations), BLEU score 17.95

| Model | Training Time (sec) | BLEU Score on Test Set | Training Perplexities | Validation Perplexities |
| --- | --- | --- | --- | --- |
| Best Model | 11517.19 | 17.95 | (plot) | (plot) |

Analysis

  • LSTM tends to perform better than GRU (it has an extra set of gate parameters)
  • Tanh tends to work better, presumably because less information is lost (ReLU and LeakyReLU discard or shrink negative activations)
  • Making the LSTM deeper (more layers) can improve performance, but it costs more time to train
  • Surprisingly, the training times for A, B, and D are roughly the same
    • this may be because the dataset is not large enough, or because the cloud service I used for training does not perform consistently

Bad Examples & Case Analysis

  • source: 全球目击组织(Global Witness)的报告记录, 光是2015年就有16个国家的185人被杀。
    • target: A Global Witness report documented 185 killings across 16 countries in 2015 alone.
    • translation: According to the Global eye, the World Health Organization reported that 185 people were killed in 2015.
    • problems:
      • Information Loss: 16 countries
      • Unknown Proper Noun: Global Witness
  • source: 大自然给了足以满足每个人需要的东西, 但无法满足每个人的贪婪
    • target: Nature provides enough for everyone’s needs, but not for everyone’s greed.
    • translation: Nature provides enough to satisfy everyone.
    • problems:
      • Huge Information Loss
  • source: 我衷心希望全球经济危机和巴拉克·奥巴马当选总统能对新冷战的荒唐理念进行正确的评估。
    • target: It is my hope that the global economic crisis and Barack Obama’s presidency will put the farcical idea of a new Cold War into proper perspective.
    • translation: I do hope that the global economic crisis and President Barack Obama will be corrected for a new Cold War.
    • problems:
      • Action Sender And Receiver Exchanged
      • Failed To Translate Complex Sentence
  • source: 人们纷纷猜测欧元区将崩溃。
    • target: Speculation about a possible breakup was widespread.
    • translation: The eurozone would collapse.
    • problems:
      • Significant Information Loss

Ways to Improve the NMT Model

  • Dataset
    • The dataset is fairly small, and the model has not been trained thoroughly on all of the data
    • Being a native Chinese speaker, I still could not understand what some of the source sentences were saying
    • The target sentences are not informationally complete; they themselves need context to be understood (e.g. the target sentence in the last "Bad Examples" entry)
    • Even for humans, some of the source sentences are too hard to translate
  • Model Architecture
    • CNN & Transformer
    • character-based model
    • Make the model even larger & deeper (... I need GPUs)
  • Tricks that might help
    • Add a proper-noun dictionary to translate unknown proper nouns word-by-word (phrase-by-phrase)
    • Initialize the (sub)word embeddings with pretrained embeddings (a minimal sketch below)
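
A minimal sketch of the embedding-initialization trick, assuming a pretrained embedding matrix whose rows are aligned with the vocabulary ids (pretrained_matrix here is a hypothetical stand-in):

```python
import numpy as np
import torch
import torch.nn as nn

vocab_size, embed_size = 21000, 512  # assumed sizes
# Stand-in for real pretrained vectors aligned with the vocab ids.
pretrained_matrix = np.random.rand(vocab_size, embed_size)

embedding = nn.Embedding(vocab_size, embed_size)
with torch.no_grad():
    embedding.weight.copy_(torch.from_numpy(pretrained_matrix).float())
# Optionally freeze the embeddings:
# embedding.weight.requires_grad = False
```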

How To Run

  • Download the dataset you want, and change every "./zh_en_data" in run.sh to the path where your data is stored
  • To run locally on a CPU (mostly for a sanity check; a CPU cannot realistically train the model)
    • set up the environment using conda/miniconda: conda env create --file local_env.yml
  • To run on a GPU
    • set up the environment and running process following the Colab notebook

Contact

If you have any questions or trouble running the code, feel free to contact me via email.
