PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models



make it parse


This is the official implementation of the following paper:

Torsten Scholak, Nathan Schucher, Dzmitry Bahdanau. PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP).

If you use this code, please cite:

@inproceedings{Scholak2021:PICARD,
  author = {Torsten Scholak and Nathan Schucher and Dzmitry Bahdanau},
  title = {PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models},
  booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
  year = {2021},
  publisher = {Association for Computational Linguistics},
}

Overview

This code implements:

  • The PICARD algorithm for constrained decoding from language models.
  • A text-to-SQL semantic parser based on pre-trained sequence-to-sequence models and PICARD that achieves state-of-the-art performance on both the Spider and CoSQL datasets.

About PICARD

TL;DR: We introduce PICARD -- a new method for simple and effective constrained decoding from large pre-trained language models. On the challenging Spider and CoSQL text-to-SQL datasets, PICARD significantly improves the performance of fine-tuned but otherwise unmodified T5 models. Using PICARD, our T5-3B models achieved state-of-the-art performance on both Spider and CoSQL.

In text-to-SQL translation, the goal is to translate a natural language question into a SQL query. There are two main challenges to this task:

  1. The generated SQL needs to be semantically correct, that is, correctly reflect the meaning of the question.
  2. The SQL also needs to be valid, that is, it must not result in an execution error.

So far, there has been a trade-off between these two goals: The second problem can be solved by using a special decoder architecture that -- by construction -- always produces valid SQL. This is the approach taken by most prior work. Those decoders are called "constrained decoders", and they need to be trained from scratch on the text-to-SQL dataset. However, this limits the generality of the decoders, which is a problem for the first goal.

A better approach would be to use a pre-trained encoder-decoder model and to constrain its decoder to produce valid SQL after fine-tuning the model on the text-to-SQL task. This is the approach taken by the PICARD algorithm.

How is PICARD different from existing constrained decoders?

  • It’s an incremental parsing algorithm that integrates with ordinary beam search.
  • It doesn’t require any training.
  • It doesn’t require modifying the model.
  • It works with any model that generates a sequence of tokens (including language models).
  • It doesn’t require a special vocabulary.
  • It works with character-, sub-word-, and word-level language models.

How does PICARD work?

The following picture shows how PICARD is integrated with beam search.



Decoding starts from the left and proceeds to the right. The algorithm begins with a single token (usually <s>), and then keeps expanding the beam with hypotheses generated token-by-token by the decoder. At each decoding step and for each hypothesis, PICARD checks whether the next top-k tokens are valid. In the image above, only 3 token predictions are shown, and k is set to 2. Valid tokens (☑) are added to the beam. Invalid ones (☒) are discarded. The k+1-th, k+2-th, ... tokens are discarded, too. As in normal beam search, the beam is pruned to contain only the top-n hypotheses. n is the beam size, and in the image above it is set to 2 as well. Hypotheses that are terminated with the end-of-sentence token (usually </s>) are not expanded further. The algorithm stops when all hypotheses are terminated or when the maximum number of tokens has been reached.
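To make this concrete, here is a minimal Python sketch of the decoding loop described above. The helpers lm_top_k and is_valid_prefix are hypothetical placeholders (the model's top-k next-token proposals and a PICARD-style parse check); the sketch only shows where the validity check plugs into beam search, not the repository's actual implementation.

import heapq
from typing import Callable, List, Tuple

def constrained_beam_search(
    lm_top_k: Callable[[List[int]], List[Tuple[int, float]]],  # hypothetical: top-k (token, log-prob) pairs for a prefix
    is_valid_prefix: Callable[[List[int]], bool],              # hypothetical: does the detokenized prefix still parse?
    bos: int,
    eos: int,
    beam_size: int = 2,
    max_len: int = 128,
) -> List[Tuple[List[int], float]]:
    beam = [([bos], 0.0)]  # (token sequence, cumulative log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beam:
            for token, logprob in lm_top_k(seq):
                new_seq = seq + [token]
                # PICARD: expansions whose prefix fails to parse never enter the beam
                if token != eos and not is_valid_prefix(new_seq):
                    continue
                if token == eos:
                    # terminated hypotheses are not expanded further
                    # (a full implementation would also finalize the parse here)
                    finished.append((new_seq, score + logprob))
                else:
                    candidates.append((new_seq, score + logprob))
        if not candidates:
            break
        # as in ordinary beam search, keep only the top-n hypotheses
        beam = heapq.nlargest(beam_size, candidates, key=lambda c: c[1])
    return heapq.nlargest(beam_size, finished or beam, key=lambda c: c[1])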

How does PICARD know whether a token is valid?

In PICARD, checking, accepting, and rejecting of tokens and token sequences is achieved through parsing. Parsing means that we attempt to assemble a data structure from the tokens that are currently in the beam or are about to be added to it. This data structure (and the parsing rules that are used to build it) encodes the constraints we want to enforce.

In the case of SQL, the data structure we parse to is the abstract syntax tree (AST) of the SQL query. The parsing rules are defined in a computer program called a parser. Database engines, such as PostgreSQL, MySQL, and SQLite, have their own built-in parser that they use internally to process SQL queries. For Spider and CoSQL, we have implemented a parser that supports a subset of the SQLite syntax and that checks additional constraints on the AST. In our implementation, the parsing rules are made up of simpler rules and primitives that are provided by a third-party parsing library.
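As a toy illustration of what "parse to a data structure" means, the following Python sketch defines an AST for a tiny SQL fragment. It is purely hypothetical and much smaller than the grammar handled by the repository's parser.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ColumnRef:
    table: Optional[str]   # e.g. "singer" in "singer.age", or None
    column: str            # e.g. "name", or "*"

@dataclass
class Condition:
    left: ColumnRef
    op: str                # "=", ">", "<", "LIKE", ...
    right: str             # literal or column reference, kept as raw text here

@dataclass
class Select:
    columns: List[ColumnRef]
    from_table: str
    where: Optional[Condition] = None

# "SELECT name FROM singer WHERE age > 30" could parse to:
example = Select(
    columns=[ColumnRef(table=None, column="name")],
    from_table="singer",
    where=Condition(ColumnRef(None, "age"), ">", "30"),
)

Constraints such as "a referenced column must belong to a table of the current database schema" can then be checked on this structure while it is being built.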

PICARD uses a parsing library called attoparsec that supports incremental input. This is a special capability that is not available in many other parsing libraries. You can feed attoparsec a string that represents only part of the expected input to parse. When parsing reaches the end of an input fragment, attoparsec will return a continuation function that can be used to continue parsing. Think of the continuation function as a suspended computation that can be resumed later. Input fragments can be parsed one after the other when they become available until the input is complete.
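attoparsec itself is a Haskell library, so the following Python sketch is only a conceptual analogue of its incremental interface, not its real API: feeding a fragment either fails, completes, or returns a partial result whose continuation can be resumed with more input later.

from dataclasses import dataclass
from typing import Callable, Union

# Conceptual analogue of an incremental parse result (hypothetical, not attoparsec's API):
@dataclass
class Fail:
    message: str

@dataclass
class Done:
    value: str

@dataclass
class Partial:
    resume: Callable[[str], "Result"]  # suspended computation: call with the next fragment

Result = Union[Fail, Done, Partial]

def match_keyword(keyword: str, seen: str = "") -> Result:
    """Incrementally match `keyword`; an empty fragment signals end of input."""
    def step(fragment: str) -> Result:
        if fragment == "":  # end of input: succeed only on a complete match
            return Done(seen) if seen == keyword else Fail(f"incomplete: {seen!r}")
        text = seen + fragment
        if not keyword.startswith(text):
            return Fail(f"{text!r} is not a prefix of {keyword!r}")
        return Done(text) if text == keyword else match_keyword(keyword, text)
    return Partial(step)

# The keyword can be fed in arbitrary fragments:
r = match_keyword("SELECT")  # Partial
r = r.resume("SEL")          # still Partial: "SEL" can still become "SELECT"
r = r.resume("ECT")          # Done("SELECT")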

Herein lies the key to PICARD: Incremental parsing of input fragments is exactly what we need to check tokens one by one during decoding.

In PICARD, parsing is initialized with an empty string, and attoparsec will return the first continuation function. We then call that continuation function with all the token predictions we want to check in the first decoding step. For those tokens that are valid, the continuation function will return a new continuation function that we can use to continue parsing in the next decoding step. For those tokens that are invalid, the continuation function will return a failure value which cannot be used to continue parsing. Such failures are discarded and never end up in the beam. We repeat the process until the end of the input is reached. The input is complete once the model predicts the end-of-sentence token. When that happens, we finalize the parsing by calling the continuation function with an empty string. If the parsing is successful, it will return the final AST. If not, it will return a failure value.
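Continuing the hypothetical Python analogue from above (and reusing its Fail/Partial/Done types and match_keyword), a single decoding step then amounts to feeding each candidate token's text to the current continuation and keeping only the survivors. The repository's actual implementation does this with attoparsec and the SQL parser, not with this toy code.

def filter_step(partial, candidate_texts):
    """Feed each candidate token text to the current continuation; keep only those that still parse."""
    survivors = []
    for text in candidate_texts:
        result = partial.resume(text)
        if isinstance(result, Fail):
            continue                      # invalid token: this hypothesis never enters the beam
        survivors.append((text, result))  # Partial (resume at the next step) or Done (complete)
    return survivors

def finalize(partial):
    """When the model predicts the end-of-sentence token, close the parse with an empty fragment."""
    return partial.resume("")             # Done(...) on success, Fail otherwise

# Example with the keyword matcher from the previous sketch:
step0 = match_keyword("SELECT")                  # initial continuation
step1 = filter_step(step0, ["SEL", "INS", "S"])  # keeps "SEL" and "S", rejects "INS"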

The parsing rules are described at a high level in the PICARD paper. For details, see the PICARD code, specifically the Language.SQL.SpiderSQL.Parse module.

How well does PICARD work?

Let's look at the numbers:

On Spider

URL                              Exact-set Match Accuracy      Execution Accuracy
                                 Dev       Test                Dev       Test
tscholak/cxmefzzi w PICARD       75.5 %    71.9 %              79.3 %    75.1 %
tscholak/cxmefzzi w/o PICARD     71.5 %    68.0 %              74.4 %    70.1 %

Click on the links to download the model.

On CoSQL Dialogue State Tracking

URL                              Question Match Accuracy       Interaction Match Accuracy
                                 Dev       Test                Dev       Test
tscholak/2e826ioa w PICARD       56.9 %    54.6 %              24.2 %    23.7 %
tscholak/2e826ioa w/o PICARD     53.8 %    51.4 %              21.8 %    21.7 %

Click on the links to download the model.

Quick Start

Prerequisites

This repository uses git submodules. Clone it like this:

$ git clone git@github.com:ElementAI/picard.git
$ cd picard
$ git submodule update --init --recursive

Training

The training script is located in seq2seq/run_seq2seq.py. You can run it with:

$ make train

The model will be trained on the Spider dataset by default. You can also train on CoSQL by running make train-cosql.

The training script will create the directory train in the current directory. Training artifacts like checkpoints will be stored in this directory.

The default configuration is stored in configs/train.json. The settings are optimized for a GPU with 40GB of memory.

These training settings should result in a model with at least 71% exact-set-match accuracy on the Spider development set. With PICARD, the accuracy should go up to at least 75%.

We have uploaded a model trained on the Spider dataset to the huggingface model hub, tscholak/cxmefzzi. A model trained on the CoSQL dialogue state tracking dataset, tscholak/2e826ioa, is available as well.
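The released checkpoints are ordinary T5 models and can be loaded with the transformers library as shown below. The input serialization (question followed by a linearized schema) is an assumption for illustration only; check seq2seq/run_seq2seq.py for the exact preprocessing. Also note that a plain generate call applies no PICARD constraints.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# tscholak/cxmefzzi is a T5-3B checkpoint, so loading it requires substantial memory.
tokenizer = AutoTokenizer.from_pretrained("tscholak/cxmefzzi")
model = AutoModelForSeq2SeqLM.from_pretrained("tscholak/cxmefzzi")

# Hypothetical input serialization: natural-language question plus a linearized schema.
question = "How many singers do we have?"
schema = "concert_singer | singer : singer_id, name, country, age"
inputs = tokenizer(f"{question} | {schema}", return_tensors="pt")

# Unconstrained beam search (no PICARD).
outputs = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))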

Evaluation

The evaluation script is located in seq2seq/run_seq2seq.py. You can run it with:

$ make eval

By default, the evaluation will be run on the Spider evaluation set. Evaluation on the CoSQL evaluation set can be run with make eval-cosql.

The evaluation script will create the directory eval in the current directory. The evaluation results will be stored there.

The default configuration is stored in configs/eval.json.

Docker

There are three docker images that can be used to run the code:

  • tscholak/text-to-sql-dev: Base image with development dependencies. Use this for development. Pull it with make pull-dev-image from the docker hub. Rebuild the image with make build-dev-image.
  • tscholak/text-to-sql-train: Training image with development dependencies but without Picard dependencies. Use this for fine-tuning a model. Pull it with make pull-train-image from the docker hub. Rebuild the image with make build-train-image.
  • tscholak/text-to-sql-eval: Training/evaluation image with all dependencies. Use this for evaluating a fine-tuned model with Picard. This image can also be used for training if you want to run evaluation during training with Picard. Pull it with make pull-eval-image from the docker hub. Rebuild the image with make build-eval-image.

All images are tagged with the current commit hash. The images are built with the buildx tool which is available in the latest docker-ce. Use make init-buildkit to initialize the buildx tool on your machine. You can then use make build-dev-image, make build-train-image, etc. to rebuild the images. Local changes to the code will not be reflected in the docker images unless they are committed to git.
