A Chinese-to-English Neural Machine Translation Project

Overview

ZH-EN NMT: Chinese-to-English Neural Machine Translation

This project is inspired by Stanford's CS224N NMT Project

Dataset used in this project: News Commentary v14

Intro

This is primarily a learning project to familiarize myself with PyTorch, machine translation, and NLP model training.

To investigate how various setups of the recurrent layer affect the final performance, I compared the training efficiency and effectiveness of different types of RNN layers for the encoder, changing one feature at a time while controlling all other parameters:

  • RNN types

    • GRU
    • LSTM
  • Activation Functions on Output Layer

    • Tanh
    • ReLU
    • LeakyReLU
  • Number of layers

    • single layer
    • double layer

Code Files

_/
├─ utils.py               # utilities
├─ vocab.py               # generate vocab
├─ model_embeddings.py    # embedding layer
├─ nmt_model.py           # NMT model definition
└─ run.py                 # training and testing

Good Translation Examples

  • source: 相反,这意味着合作的基础应当是共同的长期战略利益,而不是共同的价值观。

    • target: Instead, it means that cooperation must be anchored not in shared values, but in shared long-term strategic interests.
    • translation: On the contrary, that means cooperation should be a common long-term strategic interests, rather than shared values.
  • source: 但这个问题其实很简单: 谁来承受这些用以降低预算赤字的紧缩措施的冲击。

    • target: But the issue is actually simple: Who will bear the brunt of measures to reduce the budget deficit?
    • translation: But the question is simple: Who is to bear the impact of austerity measures to reduce budget deficits?
  • source: 上述合作对打击恐怖主义、贩卖人口和移民可能发挥至关重要的作用。

    • target: Such cooperation is essential to combat terrorism, human trafficking, and migration.
    • translation: Such cooperation is essential to fighting terrorism, trafficking, and migration.
  • source: 与此同时, 政治危机妨碍着政府追求艰难的改革。

    • target: At the same time, political crisis is impeding the government’s pursuit of difficult reforms.
    • translation: Meanwhile, political crises hamper the government’s pursuit of difficult reforms.

Preprocessing

Preprocessing Colab notebook

  • use jieba to segment Chinese sentences into space-separated words (a minimal sketch follows)
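
A minimal sketch of this step, assuming plain-text source files; the file names are illustrative and the actual preprocessing lives in the Colab notebook above:

```python
# Hedged sketch: segment raw Chinese text with jieba so that words are
# separated by spaces before vocab generation. File names are illustrative.
import jieba

def segment_line(line: str) -> str:
    # jieba.cut returns an iterator of word tokens; join them with spaces
    return " ".join(jieba.cut(line.strip()))

with open("news-commentary-v14.zh", encoding="utf-8") as fin, \
     open("train.zh", "w", encoding="utf-8") as fout:
    for line in fin:
        fout.write(segment_line(line) + "\n")
```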

Generate Vocab From Training Data

  • Input: training data of Chinese and English

  • Output: a vocab file containing the mappings from (sub)words to ids for Chinese and English. A limited-size vocabulary is selected with SentencePiece (essentially byte-pair encoding over character n-grams) to cover around 99.95% of the training data (a hedged sketch of this step follows).
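
A hedged sketch of the vocab step; the exact file names, vocab size, and options used by vocab.py may differ:

```python
# Hedged sketch: train a limited-size subword vocab with SentencePiece (BPE)
# so that roughly 99.95% of the training characters are covered.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="train.zh",            # space-separated (jieba-segmented) training text
    model_prefix="src_sp",       # writes src_sp.model and src_sp.vocab
    vocab_size=21000,            # limited vocab size (illustrative value)
    character_coverage=0.9995,   # cover about 99.95% of the training data
    model_type="bpe",            # byte-pair encoding over character n-grams
)

sp = spm.SentencePieceProcessor(model_file="src_sp.model")
print(sp.encode("相反 , 这 意味着 合作", out_type=int))  # (sub)word ids
```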

Model Definition

  • a Seq2Seq model with attention

    (Seq2Seq-with-attention architecture diagram from the book Dive into Deep Learning)

    • Encoder
      • A Recurrent Layer
    • Decoder
      • LSTMCell (hidden_size=512)
    • Attention
      • Multiplicative Attention (a minimal sketch appears after this list)
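
A minimal sketch of multiplicative (Luong-style) attention for a single decoder step; tensor names and shapes are illustrative, not necessarily those used in nmt_model.py:

```python
# Hedged sketch of multiplicative attention for one decoder step.
# enc_hiddens: (batch, src_len, enc_dim), dec_hidden: (batch, dec_dim).
import torch
import torch.nn as nn

class MultiplicativeAttention(nn.Module):
    def __init__(self, enc_dim: int, dec_dim: int):
        super().__init__()
        # score(dec, enc) = dec^T W enc, implemented as a linear projection of enc
        self.att_projection = nn.Linear(enc_dim, dec_dim, bias=False)

    def forward(self, dec_hidden, enc_hiddens, enc_masks=None):
        proj = self.att_projection(enc_hiddens)                          # (b, src_len, dec_dim)
        scores = torch.bmm(proj, dec_hidden.unsqueeze(2)).squeeze(2)     # (b, src_len)
        if enc_masks is not None:
            scores = scores.masked_fill(enc_masks.bool(), float("-inf"))  # ignore pad positions
        alpha = torch.softmax(scores, dim=1)                             # attention weights
        context = torch.bmm(alpha.unsqueeze(1), enc_hiddens).squeeze(1)  # (b, enc_dim)
        return context, alpha

# Example: a bidirectional encoder (enc_dim = 2 * 512) attended by a 512-dim decoder state
attn = MultiplicativeAttention(enc_dim=1024, dec_dim=512)
context, alpha = attn(torch.randn(32, 512), torch.randn(32, 20, 1024))
```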

Training And Testing Results

Training Colab notebook

  • Hyperparameters:
    • Embedding Size & Hidden Size: 512
    • Dropout Rate: 0.25
    • Starting Learning Rate: 5e-4
    • Batch Size: 32
    • Beam Size for Beam Search: 10
  • NOTE: The BLEU score reported here is computed on the test set, so it should only be used to compare the relative effectiveness of models trained on this data (a hedged example of the scoring step follows)
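
As a hedged example of how corpus-level BLEU over the decoded test set could be computed (run.py may use a different implementation, e.g. NLTK's corpus_bleu); the sentences below are taken from the translation examples above:

```python
# Hedged sketch: corpus BLEU on the test set with sacrebleu.
import sacrebleu

hypotheses = ["Such cooperation is essential to fighting terrorism, trafficking, and migration."]
references = ["Such cooperation is essential to combat terrorism, human trafficking, and migration."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])  # one reference list per hypothesis
print(f"BLEU = {bleu.score:.2f}")
```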

Experiment Setup

  • Dataset: the data is randomly split into a training set (~260,000), a validation set (~20,000), and a test set (~20,000); the same split is used for every experiment group
  • Max Number of Iterations: 50000
  • NOTE: I tried a vanilla RNN (nn.RNN) in various configurations, but its BLEU score turned out to be extremely low (the absence of residual connections might be the issue)
    • I decided not to include it in the comparison until the issue is resolved

Model                                                      Training Time (sec)   BLEU Score on Test Set
A. Bidirectional 1-Layer GRU with Tanh                     5158.99               14.26
B. Bidirectional 1-Layer LSTM with Tanh                    5150.31               16.20
C. Bidirectional 2-Layer LSTM with Tanh                    6197.58               16.38
D. Bidirectional 1-Layer LSTM with ReLU                    5275.12               14.01
E. Bidirectional 1-Layer LSTM with LeakyReLU (slope=0.1)   5292.58               14.87

[Training and validation perplexity plots for each model]

Current Best Version

Bidirectional 2-Layer LSTM with Tanh, with embed_size and hidden_size of 1024, trained for 11517.19 sec (44,000 iterations), BLEU score 17.95

Model        Training Time (sec)   BLEU Score on Test Set
Best Model   11517.19              17.95

[Training and validation perplexity plots]

Analysis

  • LSTM tends to perform better than GRU (it has an extra set of gate parameters)
  • Tanh as the output activation tends to work better, presumably because less information is lost than with the ReLU variants
  • Making the LSTM deeper (more layers) can improve performance, but it costs more time to train
  • Surprisingly, the training times for A, B, and D are roughly the same
    • the dataset may not be large enough, or the cloud service used for training may not perform consistently

Bad Examples & Case Analysis

  • source: 全球目击组织(Global Witness)的报告记录, 光是2015年就有16个国家的185人被杀。
    • target: A Global Witness report documented 185 killings across 16 countries in 2015 alone.
    • translation: According to the Global eye, the World Health Organization reported that 185 people were killed in 2015.
    • problems:
      • Information Loss: 16 countries
      • Unknown Proper Noun: Global Witness
  • source: 大自然给了足以满足每个人需要的东西, 但无法满足每个人的贪婪
    • target: Nature provides enough for everyone’s needs, but not for everyone’s greed.
    • translation: Nature provides enough to satisfy everyone.
    • problems:
      • Huge Information Loss
  • source: 我衷心希望全球经济危机和巴拉克·奥巴马当选总统能对新冷战的荒唐理念进行正确的评估。
    • target: It is my hope that the global economic crisis and Barack Obama’s presidency will put the farcical idea of a new Cold War into proper perspective.
    • translation: I do hope that the global economic crisis and President Barack Obama will be corrected for a new Cold War.
    • problems:
      • Action Sender And Receiver Exchanged
      • Failed To Translate Complex Sentence
  • source: 人们纷纷猜测欧元区将崩溃。
    • target: Speculation about a possible breakup was widespread.
    • translation: The eurozone would collapse.
    • problems:
      • Significant Information Loss

Means to Improve the NMT model

  • Dataset
    • The dataset is fairly small, and the model is not trained thoroughly on it
    • Being a native Chinese speaker, I could not understand what some of the source sentences were saying
    • The target sentences are not informationally complete; they themselves need context to be understood (e.g. the target sentence in the last "Bad Example")
    • Even for a human, some of the source sentences are too hard to translate
  • Model Architecture
    • CNN & Transformer
    • character-based model
    • Make the model even larger & deeper (... I need GPUs)
  • Tricks that might help
    • Add a proper noun dictionary to translate unknown proper nouns word-by-word (phrase-by-phrase)
    • Initialize (sub)word embedding with pretrained embedding

How To Run

  • Download the dataset you want to use, and change every "./zh_en_data" in run.sh to the path where your data is stored
  • To run locally on a CPU (mostly for a sanity check; a CPU cannot train the model in a reasonable time)
    • set up the environment using conda/miniconda: conda env create --file local_env.yml
  • To run on a GPU
    • set up the environment and run the training process by following the Colab notebook

Contact

If you have any questions or trouble running the code, feel free to contact me via email: [email protected] (Zhenbang Feng).
