lean-gym

This repository lets you interact with Lean through a REPL. See Formal Mathematics Statement Curriculum Learning for a presentation of lean-gym.

Setup

# Download pre-built binaries and build the project (targeting mathlib).
bash ./scripts/setup.sh

Usage

lean --run src/repl.lean

Starts a fresh REPL. Once started, the REPL accepts the following commands:

  • init_search: takes a declaration name and a list of namespaces to open; it initializes a proof search at the given declaration with those namespaces opened, and returns the initial tactic state (along with a fresh search_id and tactic_state_id).
  • run_tac: takes a search_id, a tactic_state_id, and a tactic; it applies the tactic to the tactic state denoted by the provided ids and returns the resulting tactic state with a new tactic_state_id.
  • clear_search: takes a search_id and clears all state associated with that search.

These commands can be interleaved freely, which makes it possible to parallelize multiple proof searches against the same REPL (see the wrapper sketch after the transcript below).

$ lean --run src/repl.lean

["init_search", ["intermediate_field.adjoin.range_algebra_map_subset", "intermediate_field finite_dimensional polynomial"]]
{"error":null,"search_id":"0","tactic_state":"⊢ ∀ (F : Type u_1) [_inst_1 : field F] {E : Type u_2} [_inst_2 : field E] [_inst_3 : algebra F E] (S : set E),\tset.range ⇑(algebra_map F E) ⊆ ↑(intermediate_field.adjoin F S)","tactic_state_id":"0"}

["init_search", ["int.prime.dvd_mul", ""]]
{"error":null,"search_id":"1","tactic_state":"⊢ ∀ {m n : ℤ} {p : ℕ}, nat.prime p → ↑p ∣ m * n → p ∣ m.nat_abs ∨ p ∣ n.nat_abs","tactic_state_id":"0"}

["run_tac",["0","0","intros"]]
{"error":null,"search_id":"0","tactic_state":"F : Type u_1,\t_inst_1 : field F,\tE : Type u_2,\t_inst_2 : field E,\t_inst_3 : algebra F E,\tS : set E\t⊢ set.range ⇑(algebra_map F E) ⊆ ↑(intermediate_field.adjoin F S)","tactic_state_id":"1"}

["run_tac",["1","0","intros"]]
{"error":null,"search_id":"1","tactic_state":"m n : ℤ,\tp : ℕ,\thp : nat.prime p,\th : ↑p ∣ m * n\t⊢ p ∣ m.nat_abs ∨ p ∣ n.nat_abs","tactic_state_id":"1"}

["run_tac",["1","1","apply (nat.prime.dvd_mul hp).mp"]]
{"error":null,"search_id":"1","tactic_state":"m n : ℤ,\tp : ℕ,\thp : nat.prime p,\th : ↑p ∣ m * n\t⊢ p ∣ m.nat_abs * n.nat_abs","tactic_state_id":"2"}

["run_tac",["1","2","rw ← int.nat_abs_mul"]]
{"error":null,"search_id":"1","tactic_state":"m n : ℤ,\tp : ℕ,\thp : nat.prime p,\th : ↑p ∣ m * n\t⊢ p ∣ (m * n).nat_abs","tactic_state_id":"3"}

["run_tac",["1","3","simp"]]
{"error":"run_tac_failed: pos=(some ⟨1, 2⟩) msg=simplify tactic failed to simplify","search_id":null,"tactic_state":null,"tactic_state_id":null}

["run_tac",["1","5","exact int.coe_nat_dvd_left.mp h"]]
{"error":"unknown_id: search_id=1 tactic_state_id=5","search_id":null,"tactic_state":null,"tactic_state_id":null}

["run_tac",["1","3","exact int.coe_nat_dvd_left.mp h"]]
{"error":null,"search_id":"1","tactic_state":"no goals","tactic_state_id":"4"}

["clear_search",["1"]]
{"error":null,"search_id":"1","tactic_state":null,"tactic_state_id":null}

["run_tac",["0","1","intros x hx,"]]
{"error":null,"search_id":"0","tactic_state":"F : Type u_1,\t_inst_1 : field F,\tE : Type u_2,\t_inst_2 : field E,\t_inst_3 : algebra F E,\tS : set E,\tx : E,\thx : x ∈ set.range ⇑(algebra_map F E)\t⊢ x ∈ ↑(intermediate_field.adjoin F S)","tactic_state_id":"2"}

["run_tac",["0","2","cases hx with f hf"]]
{"error":null,"search_id":"0","tactic_state":"F : Type u_1,\t_inst_1 : field F,\tE : Type u_2,\t_inst_2 : field E,\t_inst_3 : algebra F E,\tS : set E,\tx : E,\tf : F,\thf : ⇑(algebra_map F E) f = x\t⊢ x ∈ ↑(intermediate_field.adjoin F S)","tactic_state_id":"3"}

["run_tac",["0","3","rw ← hf"]]
{"error":null,"search_id":"0","tactic_state":"F : Type u_1,\t_inst_1 : field F,\tE : Type u_2,\t_inst_2 : field E,\t_inst_3 : algebra F E,\tS : set E,\tx : E,\tf : F,\thf : ⇑(algebra_map F E) f = x\t⊢ ⇑(algebra_map F E) f ∈ ↑(intermediate_field.adjoin F S)","tactic_state_id":"4"}

["run_tac",["0","4","exact adjoin.algebra_map_mem F S f"]]
{"error":null,"search_id":"0","tactic_state":"no goals","tactic_state_id":"5"}

["clear_search",["0"]]
{"error":null,"search_id":"0","tactic_state":null,"tactic_state_id":null}

Declaration names

Declaration names and open namespaces, as recorded by lean_proof_recording, are available in the data/ directory for use with the init_search command.
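Purely as an illustration (the layout of data/ is not specified here), the following sketch assumes a hypothetical file data/decls.names containing one declaration name followed by its open namespaces per line, and starts a search for each entry using the LeanGym wrapper sketched above:

# Hypothetical file name and line format; adapt to the actual contents of data/.
with open("data/decls.names") as f:
    for line in f:
        name, _, namespaces = line.strip().partition(" ")
        result = gym.init_search(name, namespaces)  # gym: LeanGym instance from above
        print(result["search_id"], result["error"])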

Notes

The REPL may crash in rare cases. Empirically, such crashes happen no more than ~0.01% of the time.

When a tactic state with no goals left is reached, custom logic checks that the resulting proof's type matches the top-level goal type and that it does not rely on sorry. We also check for the presence of undefined in the proof term. As an example, the following MiniF2F proofs will safely fail with the error proof_validation_failed.

["init_search", ["mathd_algebra_35", ""]]
["run_tac", ["0", "0", "intros"]]
["run_tac", ["0", "1", "sorry"]]
["init_search", ["induction_divisibility_3divnto3m2n", ""]]
["run_tac", ["0", "0", "intros"]]
["run_tac", ["0", "1", "rw [add_comm]"]]
["run_tac", ["0", "2", "have h3 : 1 * (n + 1) ≤ (n + 1)"]]
["run_tac", ["0", "3", "rw one_mul"]]
["run_tac", ["0", "4", "apply dvd_trans"]]
["run_tac", ["0", "5", "swap"]]
["run_tac", ["0", "6", "simp []"]]
["init_search", ["mathd_numbertheory_13", ""]]
["run_tac", ["0", "0", "intros u v hu hv hsum"]]
["run_tac", ["0", "1", "intro h"]]
["run_tac", ["0", "2", "contrapose h"]]
["run_tac", ["0", "3", "intro hn"]]
["run_tac", ["0", "4", "exact not_lt_of_lt hn undefined"]]