Guide to using pre-trained large language models of source code

Overview

Large Models of Source Code

I occasionally train and publicly release large neural language models on programs, including PolyCoder. Here, I describe how to use these.

  1. Setup
  2. Models (incl. PolyCoder)
  3. Datasets
  4. Evaluation
  5. How to cite

Getting Started

All current models were trained using the GPT-NeoX toolkit. First, download a pretrained checkpoint as described below, then use it either with our Docker image or by building our fork of the toolkit from source, to generate code or replicate our evaluation.

Retrieving Checkpoints

Checkpoint files for training PolyCoder are hosted on this public Zenodo repository. See this section for details on currently available models. Model checkpoints range up to 6GB, which is also the amount of GPU memory they require to run (running on CPU is neither tested nor recommended). Download and untar a checkpoint file (in this case for a 2.7B parameter model trained for 150K steps) to a directory called checkpoints/, using:

mkdir checkpoints
cd checkpoints
wget https://zenodo.org/record/6363556/files/2-7B-150K.tar
tar -xvf 2-7B-150K.tar

From Source

We maintain a public fork of the NeoX repository here, which includes the (minor) changes we made to the codebase to allow for tabs & newlines in the tokenization, and also includes instructions for running the perplexity and HumanEval tasks. Note that this repository uses a forked version of the LM Evaluation Harness with the code benchmark from our work.

Building this repository should match the process for building GPT-NeoX almost exactly. You may also use the Docker image mentioned next, mounting a checkout of the latest version of this fork over the /gpt-neox directory inside the container. Once set up, use the generate.py entrypoint (described below) for free-form code generation, or use one of the commands here to calculate perplexity and HumanEval results as in the paper.

Via Docker

A base Docker image containing a slightly modified version of the gpt-neox repository is available via DockerHub:

docker pull vhellendoorn/code-lms-neox:base

This image can be used together with a checkpoint file hosted on this public Zenodo repository. The base Docker image size is 5.4GB. Once a checkpoint has been retrieved, start the container with the following commands (substituting another GPU device index if needed):

nvidia-docker run --rm -it -e NVIDIA_VISIBLE_DEVICES=0 --shm-size=1g --ulimit memlock=-1 --mount type=bind,src=$PWD/checkpoints,dst=/gpt-neox/checkpoints vhellendoorn/code-lms-neox:base

Code Generation

The following command can be used to generate code from a prompt:

sudo ./deepy.py generate.py configs/text_generation.yml checkpoints/configs/local_setup.yml checkpoints/configs/2-7B.yml

Note: if not using the 2.7B parameter model, replace the final config file with the appropriate model size (e.g., small = 160M parameters, medium = 405M).

Once the checkpoint has been loaded, you can feed it an example such as def return1():\n """Returns 1."""\n (note the whitespace tokens) and watch it predict return 1 (and then probably a bunch of other returnX methods, depending on the sample).

The modifications to gpt-neox mentioned above center around allowing tabs and newlines in the prompt input. In interactive mode, these can be entered in their escaped forms (\t, \n); when using file-based input, the project reads the entire file instead of treating each line as a prompt. By default, the command above creates an interactive prompt and returns relatively short outputs (256 tokens) with a sampling temperature of 0.5; this behavior can be changed in /gpt-neox/checkpoints/configs/text_generation.yml.
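
For concreteness, here is a minimal Python sketch (not part of the repository) of preparing a prompt for either input mode; the file name prompt.txt is only an example:

snippet = 'def return1():\n    """Returns 1."""\n'

# Interactive mode: tabs and newlines must be typed in their escaped forms.
escaped = snippet.replace("\t", "\\t").replace("\n", "\\n")
print(escaped)  # def return1():\n    """Returns 1."""\n

# File-based mode: the whole file is read as a single prompt, so no escaping is needed.
with open("prompt.txt", "w") as f:
    f.write(snippet)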

A lower temperature (e.g., 0.2) will produce more consistent and plausible (to the model) predictions; a higher temperature such as the default may be useful for generating and evaluating many candidates (see our paper for recommendations). For the latter setting, consider switching to the input-file mode and providing an entire snippet (without escaping whitespace) in the corresponding file.

Multi-lingual Models

Several models have been trained on a large corpus of code spanning 12 programming languages. This includes a 2.7B parameter model (nick-named PolyCoder, trained for 100K and 150K steps), a 405M parameter model (100K & 150K steps) and a 160M parameter model (150K steps).

Available Models

All models are available at a public Zenodo repository, in the form of .tar files with fairly self-explanatory names (e.g., 2-7B-100K => a 2.7B parameter model trained for 100K steps). Currently available models include:

  • GPT2 - 2.7B: A 32 layer, 2,560 dimensional Transformer model, trained with a batch size of 128 sequences (256K tokens). Models are available at both 100K and 150K steps (a rough parameter-count sanity check follows this list).
    • Note that GPT-NeoX's default config for this model was modified to reduce the number of training steps (and learning rate decay steps accordingly) to 160K, down from 320K, to better match the available training resources. Hence, this model may not have reached its peak performance.
  • GPT2 - 0.4B: A 24 layer, 1,024 dimensional Transformer model based on the medium config, trained with 256K tokens per batch.
  • GPT2 - 160M: A 12 layer, 768 dimensional Transformer model based on the small config, trained with 256K tokens per batch.
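
As a rough sanity check on the sizes listed above, here is a back-of-the-envelope Python sketch. It assumes the common ~12·L·d² approximation for a Transformer's non-embedding parameters plus a ~50K-entry token embedding; these constants are assumptions, not the exact GPT-NeoX configs:

# Rough estimate of parameter counts from layer count (layers) and hidden size (dim).
def approx_params(layers: int, dim: int, vocab: int = 50_000) -> int:
    return 12 * layers * dim ** 2 + vocab * dim

print(f"{approx_params(32, 2560) / 1e9:.2f}B")  # ~2.64B, consistent with the 2.7B model
print(f"{approx_params(24, 1024) / 1e9:.2f}B")  # ~0.35B, roughly the 0.4B model
print(f"{approx_params(12, 768) / 1e6:.0f}M")   # ~123M, in the ballpark of the 160M model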

Training Process

Training was done on 4 to 8 NVIDIA RTX 8000 GPUs, largely following the standard config values, except also enabling "scaled-upper-triang-masked-softmax-fusion" and "bias-gelu-fusion" for performance and slightly changing the batch size (see model details), data split (changed to 98.9%, 0.1%, 1%), initial loss scale (2^16), and print/eval intervals.

The image below shows the validation loss curves of the various models over the course of training.

Caveats

The trained models come with a few minor known limitations:

  • This model was not trained to solve programming problems and may not perform well on a benchmark such as HumanEval. Models like Codex (powering Copilot) are pretrained on natural language, which may boost their ability to interpret NL prompts; this model only learned language from comments in code.
  • The model appears to start generating a random new file once it reaches the (predicted) end of the current one. It is possible that the end-of-document token was not properly added to the training data.
  • Whitespace is very important to the model, since no preprocessing was done on the input files. For instance, the following snippet will yield poor predictions, because in Java we would never expect an instance-method at the top-level, as is indicated by the single level of (\t) indentation of the two lines within this method:
public int getTotalWeight(List<Integer> weights) {\n\t// Sum weights in parallel.\n\treturn 

Adjusting the indentation makes it predict more reasonable continuations:

public int getTotalWeight(List<Integer> weights) {\n\t\t// Sum weights in parallel.\n\t\treturn 

The Codex paper discusses controlling for this to increase usability; this may be worth doing in a future version of the model.

Datasets

249GB Multi-Lingual Corpus

This is the corpus used to train PolyCoder.

The datasets were cloned overnight on October 9-10, 2021. To mine a similar training set, see Data.

The list of file paths can be downloaded from: https://zenodo.org/record/6363556/files/index.zip. Each row in the file is the file path along with its SHA-256 hash, to ease deduplication. That is, the hashes allow checking if files from any future test set were already contained in the training set.
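
As an illustration, one way to use those hashes in Python (a sketch that assumes each row ends with the hex-encoded SHA-256 digest and that the hash is computed over the raw file bytes; the index file name below is a placeholder for whatever index.zip extracts to):

import hashlib

def sha256_of_file(path: str) -> str:
    # Hash a candidate (e.g., test-set) file over its raw bytes.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def load_training_hashes(index_path: str) -> set:
    # Collect the hash column from the released index; assumes the hash is the last field per row.
    hashes = set()
    with open(index_path) as f:
        for line in f:
            fields = line.split()
            if fields:
                hashes.add(fields[-1].lower())
    return hashes

train_hashes = load_training_hashes("index.txt")  # placeholder name for the extracted index
print(sha256_of_file("candidate_test_file.py") in train_hashes)  # True => file was in the training set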

The data collection and filtering process is described in detail in the paper and below. The final, filtered dataset statistics are:

Language     Repositories   Size (GB)   Files
C            10,749         55          3,037,112
C#           9,511          21          2,514,494
C++          13,726         52          4,289,506
Go           12,371         15          1,416,789
Java         15,044         41          5,120,129
JavaScript   25,144         22          1,774,174
PHP          9,960          13          1,714,058
Python       25,446         16          1,550,208
Ruby         5,826          4.1         674,343
Rust         4,991          3.5         304,842
Scala        1,497          1.8         245,100
TypeScript   12,830         9.2         1,441,926

Data Collection & Filtering

I cloned the most popular repositories of 12 popular programming languages from GitHub in October 2021, keeping those with at least 50 stars and stopping at roughly 25K repositories per language. For each project, every file belonging to the majority language of that project was extracted, yielding the training set above (after cleaning). This initial, unfiltered dataset spanned 631GB and 38.9M files.

Next, similar to Codex and CodeParrot, very large (>1MB) and very short (<100 tokens) files were filtered out, reducing the dataset to 424GB. Files were then deduplicated based on a hash of their content, which reduced the number of files by another 30% or so, leaving 249GB of data and 24.1M files. No tokenization filters were applied; the model processes entire files including all comments. A code-specific vocabulary was constructed on a random 5% subset of the files above.
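
A minimal sketch of those filtering and deduplication steps in Python (the 1MB and 100-token thresholds come from the text; splitting on whitespace as a token-count proxy and hashing raw bytes are assumptions of this sketch, not necessarily the paper's exact procedure):

import hashlib
import os

def keep_file(path: str, seen_hashes: set) -> bool:
    # Drop very large files (>1MB).
    if os.path.getsize(path) > 1_000_000:
        return False
    with open(path, "rb") as f:
        content = f.read()
    # Drop very short files (<100 tokens, approximated here by whitespace splitting).
    if len(content.split()) < 100:
        return False
    # Deduplicate on a hash of the file content.
    digest = hashlib.sha256(content).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    return True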

Evaluation

Please find detailed instructions for replicating our perplexity and HumanEval results on our public fork of the NeoX repository. This in turn leverages our extension of the LM Evaluation Harness.

Evaluating Codex

To download the test sets that we used in the paper (12 programming languages), use:

wget https://zenodo.org/record/6363556/files/unseen_test_sets.tar.gz
tar -xvzf unseen_test_sets.tar.gz

To get perplexity results on these samples using Codex' API, use:

export OPENAI_API_KEY=<YOUR OPEN AI API KEY>
python3 -u Evaluation/eval_codex_all.py --dirs Code-sampled100

Where <YOUR OPEN AI API KEY> is a private string that can be obtained by signing up for OpenAI's beta.

As of March 2022, getting an API Key is free for 3 months, and afterwards a credit card needs to be entered. However, even after entering a credit card, using our evaluation script does not lead to any costs.
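
The heavy lifting is done by Evaluation/eval_codex_all.py. Purely as an illustration of the general approach (not that script's actual code), a Python sketch using the legacy OpenAI Completions interface might look like the following; the engine name is a placeholder and the interface has since changed:

import math
import openai  # legacy (pre-1.0) openai-python interface assumed here

def codex_perplexity(code: str, engine: str = "code-davinci-002") -> float:
    # Echo the prompt with token logprobs and generate nothing, then convert the
    # mean negative log-likelihood of the prompt tokens into perplexity.
    resp = openai.Completion.create(
        engine=engine, prompt=code, max_tokens=0, echo=True, logprobs=0, temperature=0
    )
    token_logprobs = resp["choices"][0]["logprobs"]["token_logprobs"]
    token_logprobs = [lp for lp in token_logprobs if lp is not None]  # first token has no logprob
    return math.exp(-sum(token_logprobs) / len(token_logprobs))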

Results - HumanEval

These are PolyCoder's results on the HumanEval benchmark:

Model               Pass@1    Pass@10   Pass@100
PolyCoder (160M)    2.13%     3.35%     4.88%
PolyCoder (400M)    2.96%     5.29%     11.59%
PolyCoder (2.7B)    5.59%     9.87%     17.68%
CodeParrot (110M)   3.80%     6.57%     12.78%
CodeParrot (1.5B)   3.58%     8.03%     14.96%
GPT-Neo (125M)      0.75%     1.88%     2.97%
GPT-Neo (1.3B)      4.79%     7.47%     16.30%
GPT-Neo (2.7B)      6.41%     11.27%    21.37%
GPT-J (6B)          11.62%    15.74%    27.74%
Codex (300M)        13.17%    20.37%    36.27%
Codex (2.5B)        21.36%    35.42%    59.50%
Codex (12B)         28.81%    46.81%    72.31%
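
Pass@k here is the standard HumanEval metric introduced with the Codex paper (Chen et al., 2021): with n samples per problem of which c pass the unit tests, pass@k = 1 - C(n-c, k)/C(n, k), averaged over problems. A small Python sketch of that estimator:

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # Probability that at least one of k samples (drawn without replacement from n
    # generations, c of which are correct) passes: 1 - C(n-c, k) / C(n, k).
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=100, c=3, k=10))  # ~0.27 when 3 of 100 samples are correct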

Results - Multilingual Language Modeling

These are the perplexity results of PolyCoder on the multilingual test sets:

Language     Perplexity
C            2.3464
C#           2.5832
C++          2.9189
Go           2.567
Java         2.9194
JavaScript   3.0611
PHP          3.6954
Python       3.1767
Ruby         3.9742
Rust         3.2449
Scala        3.8735
TypeScript   3.6143
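
For reference, perplexity here is (under the usual convention) the exponential of the average per-token negative log-likelihood over the test set; a minimal Python illustration:

import math

def perplexity(token_log_likelihoods: list) -> float:
    # exp of the mean negative log-likelihood across all test tokens.
    return math.exp(-sum(token_log_likelihoods) / len(token_log_likelihoods))

print(perplexity([-0.7, -1.2, -0.9]))  # ~2.54 for three example token log-likelihoods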

A comparison with the other models is available in Figure 6 of the paper.

Citation

A Systematic Evaluation of Large Language Models of Code

@article{xu2022systematic,
  title={A Systematic Evaluation of Large Language Models of Code},
  author={Xu, Frank F and Alon, Uri and Neubig, Graham and Hellendoorn, Vincent J},
  journal={arXiv preprint arXiv:2202.13169},
  year={2022}
}