NaturalProofs: Mathematical Theorem Proving in Natural Language

Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, Kyunghyun Cho

This repo contains:

  • The NaturalProofs Dataset and the mathematical reference retrieval task data.
  • Preprocessing code for NaturalProofs and the retrieval task data.
  • Training and evaluation for mathematical reference retrieval.
  • Pretrained models for mathematical reference retrieval.

Please cite our work if you find the resources in this repository useful:

@article{welleck2021naturalproofs,
  title={NaturalProofs: Mathematical Theorem Proving in Natural Language},
  author={Welleck, Sean and Liu, Jiacheng and Le Bras, Ronan and Hajishirzi, Hannaneh and Choi, Yejin and Cho, Kyunghyun},
  year={2021}
}
Contents

  • NaturalProofs Dataset: Dataset, Preprocessing
  • Mathematical Reference Retrieval: Dataset, Setup, Preprocessing, Pretrained models, Training, Evaluation

NaturalProofs Dataset

We provide the preprocessed NaturalProofs Dataset (JSON):

  • NaturalProofs Dataset: dataset.json [zenodo]
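
Once downloaded, the JSON file can be loaded and inspected with a few lines of Python (a minimal sketch; the path is a placeholder and no particular schema is assumed):

import json

# Load the preprocessed NaturalProofs dataset (path is a placeholder).
with open("/path/to/dataset.json") as f:
    data = json.load(f)

# List the top-level keys to see how the file is organized.
print(list(data.keys()))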

Preprocessing

To recreate the NaturalProofs dataset.json from raw ProofWiki data:

  1. Download the ProofWiki XML.
  2. Preprocess the data using notebooks/parse_proofwiki.ipynb.
  3. Form the data splits using notebooks/dataset_splits.ipynb.

Mathematical Reference Retrieval

Dataset

The Mathematical Reference Retrieval dataset contains (x, r, y) examples with theorem statements x, positive and negative references r, and 0/1 labels y, derived from NaturalProofs.
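
Conceptually, each example pairs a theorem statement with a candidate reference and a binary label. A hypothetical illustration (the texts and variable names below are for exposition only, not the stored format):

x = "Theorem: The sum of two even integers is even."          # theorem statement
r = "Definition: An integer n is even if n = 2k for some k."  # candidate reference
y = 1  # 1 if r is a true reference for x, 0 for a sampled negative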

We provide the version used in the paper (bert-base-cased tokenizer, 200 randomly sampled negatives):

  • Reference Retrieval Dataset: bert-base-cased, 200 negatives

Pretrained Models

  • bert-base-cased
  • lstm

These models were trained with the "bert-base-cased 200 negatives" dataset provided above.

Setup

python setup.py develop
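
This installs the repository's package in editable (development) mode, so local code changes take effect without reinstalling; pip install -e . is an equivalent alternative.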

See the Dockerfile for additional version info, etc.

Generating and tokenizing

To create your own version of the retrieval dataset, use python utils.py.

This step is not needed if you are using the reference retrieval dataset provided above.

Example:

python utils.py --filepath /path/to/dataset.json --output-path /path/to/out/ --model-type bert-base-cased
# => Writing dataset to /path/to/out/dataset_tokenized__bert-base-cased_200.pkl

Evaluation

Using the retrieval dataset and a pretrained model provided above, you can reproduce the test evaluation metrics reported in the paper:

  1. Predict the rankings (use --method lstm and --model-type lstm for the LSTM model, and --split valid during model development):
python naturalproofs/predict.py \
--method bert-base-cased \
--model-type bert-base-cased \
--datapath /path/to/dataset_tokenized__bert-base-cased_200.pkl \
--datapath-base /path/to/dataset.json \
--checkpoint-path /path/to/best.ckpt \
--output-dir /path/to/out/ \
--split test
  2. Compute metrics over the rankings (again, pass --method lstm for the LSTM model):
python naturalproofs/analyze.py \
--method bert-base-cased \
--eval-path /path/to/out/eval.pkl \
--analysis-path /path/to/out/analysis.pkl
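
The analysis output is a standard pickle file, so it can be loaded for a quick look (a minimal sketch; the exact structure of the stored object is determined by analyze.py):

import pickle

# Load the metrics written by analyze.py (path is a placeholder).
with open("/path/to/out/analysis.pkl", "rb") as f:
    analysis = pickle.load(f)

print(analysis)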

Training

python naturalproofs/model.py \
--datapath /path/to/dataset_tokenized__bert-base-cased_200.pkl \
--default-root-dir /path/to/out/
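
Checkpoints and logs are written under the directory passed to --default-root-dir; a saved checkpoint can then be supplied to predict.py via --checkpoint-path (see Evaluation above).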

Classical Retrieval Baselines

TF-IDF example:

python naturalproofs/baselines.py \
--method tfidf \
--datapath /path/to/dataset_tokenized__bert-base-cased_200.pkl \
--datapath-base /path/to/dataset.json \
--savedir /path/to/out/

Then use analyze.py as shown above to compute metrics.
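
For intuition, here is a minimal sketch of TF-IDF reference ranking using scikit-learn; it is illustrative only (the example texts are made up) and is not the repository's implementation:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical reference statements; in the real task these come from dataset.json.
references = [
    "Definition of an even integer.",
    "The sum of two integers is an integer.",
    "Definition of a prime number.",
]
theorem = "The sum of two even integers is even."

# Fit TF-IDF on the reference corpus and embed the theorem in the same space.
vectorizer = TfidfVectorizer()
ref_vectors = vectorizer.fit_transform(references)
theorem_vector = vectorizer.transform([theorem])

# Rank references by cosine similarity to the theorem statement.
scores = cosine_similarity(theorem_vector, ref_vectors).ravel()
ranking = scores.argsort()[::-1]
print([references[i] for i in ranking])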
