Weakly Supervised Text-to-SQL Parsing through Question Decomposition

Overview

The official repository for the paper "Weakly Supervised Text-to-SQL Parsing through Question Decomposition" by Tomer Wolfson, Daniel Deutch and Jonathan Berant, accepted to the Findings of NAACL 2022.

This repository contains the code and data used in our paper:

  1. Code for automatically synthesizing SQL queries from question decompositions + answers
  2. Code for the text-to-SQL and text-to-QDMR models used in our paper

Setup 🙌🏼

  1. Create the virtual environment
conda create -n [ENV_NAME] python=3.8
conda activate [ENV_NAME]
  2. Clone the repository
git clone https://github.com/tomerwolgithub/question-decomposition-to-sql
cd question-decomposition-to-sql
  3. Install the relevant requirements
pip install -r requirements.txt
python -m spacy download en_core_web_lg
  4. To train the QDMR parser model, set up a separate environment (due to different Huggingface versions):
conda create -n qdmr_parser_env python=3.8
conda activate qdmr_parser_env
pip install -r requirements_qdmr_parser.txt
python -m spacy download en_core_web_lg

Download Resources 🗝️

1. QDMR Parsing Datasets:

2. Text-to-SQL Datasets:

3. Databases (schema & contents):

Convert the MySQL databases of Academic, IMDB, Yelp and GeoQuery to sqlite format using Jean-Luc Lacroix's mysql2sqlite tool:

./mysql2sqlite academic_mysql.sql | sqlite3 academic_sqlite.db
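
To verify a conversion, you can list the tables of the resulting sqlite file. A minimal sketch in Python (assuming the output file academic_sqlite.db from the command above):

import sqlite3

# Sanity check: list the tables of a converted sqlite database.
conn = sqlite3.connect("academic_sqlite.db")
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
).fetchall()
print([name for (name,) in tables])
conn.close()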

Data Generation 🔨

Our SQL synthesis is given examples of <QDMR, database, answer> and automatically generates a SQL query that executes to the correct answer. The QDMR decompositions are either manually annotated or automatically predicted by a trained QDMR parser.

Begin by copying all relevant sqlite databases to the data_generation directory.

mkdir data_generation/data
mkdir data_generation/data/spider_databases # copy Spider databases here
mkdir data_generation/data/other_databases # copy Academic, IMDB, Yelp and Geo databases here
  1. The SQL synthesis expects a formatted csv file, see example. Note that the SQL query in these files is only used to compute the answer (a quick way to inspect such a file is sketched after the command below).
  2. Note that synthesis may take several hours, as multiple candidate SQL queries are executed on their respective databases.
  3. To synthesize SQL from the <QDMR, database, answer> examples run:
python data_generation/main.py \
--input_file input_qdmr_examples.csv \
--output_file qdmr_grounded_sql.csv \
--json_steps True
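
Before launching a long synthesis run, it can help to sanity-check the input csv. A small sketch (the file name matches the command above; the exact columns are defined by the linked example file):

import csv

# Print the header and the first example of the SQL-synthesis input file.
with open("input_qdmr_examples.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f)
    header = next(reader)
    first_row = next(reader, None)
print("columns:", header)
print("first example:", first_row)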

Synthesized Data

The SQL synthesized using QDMR + answer supervision is available for each dataset in the data/sql_synthesis_results/ directory.

  • data/sql_synthesis_results/gold_qdmr_supervision: contains SQL synthesized using gold QDMRs that are manually annotated
  • data/sql_synthesis_results/predicted_qdmr_supervision: contains SQL synthesized using QDMRs predicted by a trained parser

Models 🗂️

QDMR Parser

The QDMR parser is a T5-large sequence-to-sequence model that is finetuned to map questions to their QDMR. The model expects two csv files as input, serving as its train and dev sets. Use the files from the downloaded Break dataset to train the parser. Make sure that you are in the relevant python environment (requirements_qdmr_parser.txt).

To train the QDMR parser configure the following parameters in train.py:

  • data_dir: the path to the directory containing the NL to QDMR datasets
  • training_set_file: name of the train set csv (e.g. break_train.csv)
  • dev_set_file: name of the dev set csv (e.g. break_dev.csv)
  • output_dir: the directory to store the trained model

After configuration, train the model as follows:

TOKENIZERS_PARALLELISM=false CUDA_VISIBLE_DEVICES=0 python src/qdmr_parser/train.py

To test a trained model and store its predictions, configure the following parameters in test.py:

  • checkpoint_path: path to the trained QDMR parser model to be evaluated
  • dev_set_file: name of the dev set csv to generate predictions for
  • predictions_output_file: the output file to store the parser's generated predictions

And run the following command:

TOKENIZERS_PARALLELISM=false CUDA_VISIBLE_DEVICES=0 python src/qdmr_parser/test.py
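
For a quick interactive check of a trained checkpoint, the sketch below loads it with the Hugging Face transformers API. It assumes the checkpoint directory written by train.py is loadable with from_pretrained and that the parser takes the plain question text as input; both are assumptions, so prefer test.py for actual evaluation.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical path to a checkpoint directory produced by src/qdmr_parser/train.py.
checkpoint = "trained_qdmr_parser"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

question = "How many singers are from France?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=256, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))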

Text-to-SQL

The text-to-SQL models are T5-large sequence-to-sequence models, finetuned to map questions to executable SQL queries. We compare models trained on gold SQL queries, annotated by experts, with models trained on our SQL synthesized from QDMR and answer supervision.

1. Setup directory

Set up the data for the text-to-SQL experiments as follows:

data
├── tables.json                          # Spider tables.json
├── databases
│   ├── academic
│   │   └── academic.sqlite              # Sqlite version of the populated Academic database (see downloads)
│   ├── geo
│   │   └── geo.sqlite                   # Sqlite version of the populated Geo database (see downloads)
│   ├── imdb
│   │   └── imdb.sqlite                  # Sqlite version of the populated IMDB database (see downloads)
│   ├── spider_databases                 # Spider databases directory
│   │   ├── activity_1
│   │   │   └── activity_1.sqlite
│   │   └── ...
│   └── yelp
│       └── yelp.sqlite                  # Sqlite version of the populated Yelp database (see downloads)
└── queries
    ├── geo                              # See experiments data
    │   ├── geo_qdmr_train.json
    │   ├── geo_qdmr_predicted_train.json
    │   ├── geo_gold_train.json
    │   ├── geo_gold_dev.json
    │   ├── geo_gold_test.json
    │   ├── geo_gold_train.sql
    │   ├── geo_gold_dev.sql
    │   └── geo_gold_test.sql
    └── spider
        ├── spider_qdmr_train.json               # See experiments data
        ├── spider_qdmr_predicted_train.json     # See experiments data
        ├── spider_gold_train.json               # Spider training set
        ├── spider_gold_dev.json                 # Spider dev set
        ├── spider_gold_train.sql                # Spider training set SQL queries
        └── spider_gold_dev.sql                  # Spider dev set SQL queries

Database files are described in the downloads section. See the experiments section for the exact train and test files.
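
Before training, a short sketch to check that the expected layout is in place (paths mirror the tree above; extend the list for the Geo files as needed):

from pathlib import Path

# Verify that the text-to-SQL data directory matches the expected layout.
data_dir = Path("data")
expected = [
    "tables.json",
    "databases/spider_databases",
    "queries/spider/spider_gold_train.json",
    "queries/spider/spider_gold_dev.json",
    "queries/spider/spider_gold_dev.sql",
]
for rel in expected:
    status = "ok" if (data_dir / rel).exists() else "MISSING"
    print(f"{status:8} {rel}")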

2. Train model

To train the text-to-SQL model, configure the following parameters in train.py:

  • dataset: either spider or geo
  • target_encoding: sql for gold SQL, and either qdmr_formula or qdmr_sql for the QDMR experiments
  • data_dir: path to the directory containing the experiments data
  • output_dir: the directory to store the trained model
  • db_dir: the directory containing the databases
  • training_set_file: training set file in the data directory e.g. spider/spider_gold_train.json
  • dev_set_file: dev set file in the data directory e.g. spider/spider_gold_dev.json
  • dev_set_sql: dev set SQL queries in the data directory e.g. spider/spider_gold_dev.sql

After configuration, train the model as follows:

CUDA_VISIBLE_DEVICES=0 python train.py 

3. Test model

To test the text-to-SQL model, first configure the relevant parameters and checkpoint_path in test.py. Then generate the trained model's predictions using:

CUDA_VISIBLE_DEVICES=0 python test.py 

Experiments ⚗️

Data

Gold SQL:

For the Spider experiments we use its original train and dev json and sql files. For Geo880, Academic, IMDB and Yelp we format the original datasets as json files, available here.

QDMR Synthesized SQL:

The QDMR text-to-SQL models are not trained directly on the synthesized SQL. Instead, we train on an encoded QDMR representation with its phrase-DB linking (from the SQL synthesis). This representation is automatically mapped to SQL to evaluate the models' execution accuracy. To generate these grounded QDMRs we use the output of the data generation phase. The function encoded_grounded_qdmr in src/data_generation/write_encoding.py receives the json file containing the synthesized SQL examples. It then encodes them as lisp-style formulas of QDMR steps and their relevant phrase-DB linking.
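
A minimal sketch of this step; the exact signature and import path of encoded_grounded_qdmr are assumptions here (it may take additional arguments), so check write_encoding.py before use:

# Sketch only: the signature and import path are assumptions,
# see src/data_generation/write_encoding.py for the actual interface.
from src.data_generation.write_encoding import encoded_grounded_qdmr

# Json file with the synthesized SQL examples produced by the data generation phase.
encoded_examples = encoded_grounded_qdmr("qdmr_grounded_sql.json")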

For convenience, you can download the encoded QDMR training sets used in our experiments here. These include:

  • qdmr_ground_enc_spider_train.json: 5,349 examples, synthesized using gold QDMR + answer supervision
  • qdmr_ground_enc_predicted_spider_train_few_shot: 5,075 examples, synthesized using predicted QDMR (from a parser trained on 700 gold QDMRs) + answer supervision
  • qdmr_ground_enc_predicted_spider_train_30_db.json: 1,129 examples, synthesized using predicted QDMR + answer supervision
  • qdmr_ground_enc_predicted_spider_train_40_db.json: 1,440 examples, synthesized using predicted QDMR + answer supervision
  • qdmr_ground_enc_predicted_spider_train_40_db_V2.json: 1,552 examples, synthesized using predicted QDMR + answer supervision
  • qdmr_ground_enc_geo880_train.json: 454 examples, synthesized using gold QDMR + answer supervision
  • qdmr_ground_enc_predicted_geo_train_zero_shot.json: 432 examples, synthesized using predicted QDMR + answer supervision

Configurations

Below are the configurations for training the text-to-SQL models on Spider; all other parameters are fixed in train.py.

SQL Gold (Spider):

{'dataset': 'spider',
'target_encoding': 'sql',
'db_dir': 'databases/spider_databases',
'training_set_file': 'queries/spider/spider_gold_train.json',
'dev_set_file': 'queries/spider/spider_gold_dev.json',
'dev_set_sql': 'queries/spider/spider_gold_dev.sql'}

QDMR Gold (Spider):

{'dataset': 'spider',
'target_encoding': 'qdmr_formula',
'db_dir': 'databases/spider_databases',
'training_set_file': 'queries/spider/spider_qdmr_train.json',
'dev_set_file': 'queries/spider/spider_gold_dev.json',
'dev_set_sql': 'queries/spider/spider_gold_dev.sql'}

QDMR Predicted (Spider):

{'dataset': 'spider',
'target_encoding': 'qdmr_formula',
'db_dir': 'databases/spider_databases',
'training_set_file': 'queries/spider/spider_qdmr_predicted_train.json',
'dev_set_file': 'queries/spider/spider_gold_dev.json',
'dev_set_sql': 'queries/spider/spider_gold_dev.sql'}

Below are the configurations for training the text-to-SQL models on Geo880.

SQL Gold (Geo):

{'dataset': 'geo',
'target_encoding': 'sql',
'db_dir': 'databases',
'training_set_file': 'queries/geo/geo_gold_train.json',
'dev_set_file': 'queries/geo/geo_gold_dev.json',
'dev_set_sql': 'queries/geo/geo_gold_dev.sql'}

QDMR Gold (Geo):

{'dataset': 'geo',
'target_encoding': 'qdmr_sql',
'db_dir': 'databases',
'training_set_file': 'queries/geo/geo_qdmr_train.json',
'dev_set_file': 'queries/geo/geo_gold_dev.json',
'dev_set_sql': 'queries/geo/geo_gold_dev.sql'}

QDMR Predicted (Geo):

{'dataset': 'geo',
'target_encoding': 'qdmr_sql',
'db_dir': 'databases',
'training_set_file': 'queries/geo/geo_qdmr_predicted_train.json',
'dev_set_file': 'queries/geo/geo_gold_dev.json',
'dev_set_sql': 'queries/geo/geo_gold_dev.sql'}

Evaluation

Text-to-SQL model performance is evaluated using SQL execution accuracy in src/text_to_sql/eval_spider.py. The script automatically converts encoded QDMR predictions to SQL before executing them on the target database.
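
As a reference for what execution accuracy measures, here is a self-contained sketch of the metric (not the repository's evaluation script): a prediction counts as correct if it executes and returns the same result set as the gold query.

import sqlite3
from collections import Counter

def execution_match(db_path: str, predicted_sql: str, gold_sql: str) -> bool:
    """Return True if both queries execute and yield the same multiset of rows."""
    conn = sqlite3.connect(db_path)
    try:
        pred = conn.execute(predicted_sql).fetchall()
        gold = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # a prediction that fails to execute counts as incorrect
    finally:
        conn.close()
    return Counter(pred) == Counter(gold)

# Illustrative usage (database path and queries are placeholders):
# execution_match("data/databases/geo/geo.sqlite",
#                 "SELECT count(*) FROM state;", "SELECT count(*) FROM state;")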

Citation ✍🏽

@inproceedings{wolfson-etal-2022-weakly,
    title = "Weakly Supervised Text-to-SQL Parsing through Question Decomposition",
    author = "Wolfson, Tomer and Deutch, Daniel and Berant, Jonathan",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
    year = "2022",
}

License

This repository and its data are released under the MIT license.

For the licensing of all external datasets and databases used throughout our experiments, please refer to their respective sources.
