lean-gym

This repository lets you interact with Lean through a REPL. See Formal Mathematics Statement Curriculum Learning for a presentation of lean-gym.

Setup

# Download pre-built binaries and build the project (targeting mathlib).
bash ./scripts/setup.sh

Usage

lean --run src/repl.lean

Starts a fresh REPL. Once started, the REPL accepts the following commands:

  • init_search: takes a declaration name and a list of open namespaces; initializes a search at the given declaration with the provided namespaces opened, and returns the initial tactic state (along with a fresh search_id and tactic_state_id).
  • run_tac: takes a search_id, a tactic_state_id, and a tactic to apply at the tactic state denoted by the provided ids; returns the resulting tactic state and a new tactic_state_id.
  • clear_search: takes a search_id and clears all state related to that search.

The commands can be interleaved freely, enabling multiple proof searches to be parallelized against the same REPL; a minimal Python driver for doing so is sketched after the example session below.

$ lean --run src/repl.lean

["init_search", ["intermediate_field.adjoin.range_algebra_map_subset", "intermediate_field finite_dimensional polynomial"]]
{"error":null,"search_id":"0","tactic_state":"⊢ ∀ (F : Type u_1) [_inst_1 : field F] {E : Type u_2} [_inst_2 : field E] [_inst_3 : algebra F E] (S : set E),\tset.range ⇑(algebra_map F E) ⊆ ↑(intermediate_field.adjoin F S)","tactic_state_id":"0"}

["init_search", ["int.prime.dvd_mul", ""]]
{"error":null,"search_id":"1","tactic_state":"⊢ ∀ {m n : ℤ} {p : ℕ}, nat.prime p → ↑p ∣ m * n → p ∣ m.nat_abs ∨ p ∣ n.nat_abs","tactic_state_id":"0"}

["run_tac",["0","0","intros"]]
{"error":null,"search_id":"0","tactic_state":"F : Type u_1,\t_inst_1 : field F,\tE : Type u_2,\t_inst_2 : field E,\t_inst_3 : algebra F E,\tS : set E\t⊢ set.range ⇑(algebra_map F E) ⊆ ↑(intermediate_field.adjoin F S)","tactic_state_id":"1"}

["run_tac",["1","0","intros"]]
{"error":null,"search_id":"1","tactic_state":"m n : ℤ,\tp : ℕ,\thp : nat.prime p,\th : ↑p ∣ m * n\t⊢ p ∣ m.nat_abs ∨ p ∣ n.nat_abs","tactic_state_id":"1"}

["run_tac",["1","1","apply (nat.prime.dvd_mul hp).mp"]]
{"error":null,"search_id":"1","tactic_state":"m n : ℤ,\tp : ℕ,\thp : nat.prime p,\th : ↑p ∣ m * n\t⊢ p ∣ m.nat_abs * n.nat_abs","tactic_state_id":"2"}

["run_tac",["1","2","rw ← int.nat_abs_mul"]]
{"error":null,"search_id":"1","tactic_state":"m n : ℤ,\tp : ℕ,\thp : nat.prime p,\th : ↑p ∣ m * n\t⊢ p ∣ (m * n).nat_abs","tactic_state_id":"3"}

["run_tac",["1","3","simp"]]
{"error":"run_tac_failed: pos=(some ⟨1, 2⟩) msg=simplify tactic failed to simplify","search_id":null,"tactic_state":null,"tactic_state_id":null}

["run_tac",["1","5","exact int.coe_nat_dvd_left.mp h"]]
{"error":"unknown_id: search_id=1 tactic_state_id=5","search_id":null,"tactic_state":null,"tactic_state_id":null}

["run_tac",["1","3","exact int.coe_nat_dvd_left.mp h"]]
{"error":null,"search_id":"1","tactic_state":"no goals","tactic_state_id":"4"}

["clear_search",["1"]]
{"error":null,"search_id":"1","tactic_state":null,"tactic_state_id":null}

["run_tac",["0","1","intros x hx,"]]
{"error":null,"search_id":"0","tactic_state":"F : Type u_1,\t_inst_1 : field F,\tE : Type u_2,\t_inst_2 : field E,\t_inst_3 : algebra F E,\tS : set E,\tx : E,\thx : x ∈ set.range ⇑(algebra_map F E)\t⊢ x ∈ ↑(intermediate_field.adjoin F S)","tactic_state_id":"2"}

["run_tac",["0","2","cases hx with f hf"]]
{"error":null,"search_id":"0","tactic_state":"F : Type u_1,\t_inst_1 : field F,\tE : Type u_2,\t_inst_2 : field E,\t_inst_3 : algebra F E,\tS : set E,\tx : E,\tf : F,\thf : ⇑(algebra_map F E) f = x\t⊢ x ∈ ↑(intermediate_field.adjoin F S)","tactic_state_id":"3"}

["run_tac",["0","3","rw ← hf"]]
{"error":null,"search_id":"0","tactic_state":"F : Type u_1,\t_inst_1 : field F,\tE : Type u_2,\t_inst_2 : field E,\t_inst_3 : algebra F E,\tS : set E,\tx : E,\tf : F,\thf : ⇑(algebra_map F E) f = x\t⊢ ⇑(algebra_map F E) f ∈ ↑(intermediate_field.adjoin F S)","tactic_state_id":"4"}

["run_tac",["0","4","exact adjoin.algebra_map_mem F S f"]]
{"error":null,"search_id":"0","tactic_state":"no goals","tactic_state_id":"5"}

["clear_search",["0"]]
{"error":null,"search_id":"0","tactic_state":null,"tactic_state_id":null}

Declaration names

Declaration names and open namespaces, as recorded by lean_proof_recording, are available in the data/ directory for use with the init_search command.
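
As a hypothetical sketch, these files can be fed to the wrapper sketched in the Usage section. The filename and the "declaration name followed by optional open namespaces" line format below are assumptions; inspect the actual files under data/ for the layout produced by lean_proof_recording.

# Hypothetical sketch: initialize a search for each recorded declaration.
# The path and line format are assumptions; `gym` is the illustrative
# LeanGym instance from the sketch above.
with open("data/decl_names.txt") as f:        # hypothetical filename
    for line in f:
        if not line.strip():
            continue
        parts = line.strip().split(" ", 1)
        decl = parts[0]
        namespaces = parts[1] if len(parts) > 1 else ""
        resp = gym.init_search(decl, namespaces)
        if resp["error"] is None:
            print(resp["search_id"], resp["tactic_state"])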

Notes

The REPL is subject to crashes in rare cases; empirically, such crashes happen no more than ~0.01% of the time.
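
When running many searches, it may be worth tolerating these rare crashes by restarting the process and replaying the affected search. A minimal sketch, reusing the illustrative LeanGym wrapper from the Usage section; the retry policy is an assumption, not part of this repo.

# Minimal sketch: if the REPL process dies mid-search, start a fresh one
# and replay the search from scratch. `LeanGym` is the illustrative wrapper
# sketched above; the retry policy here is an assumption.
def run_with_restart(decl, tactics, max_restarts=2):
    gym = LeanGym()
    for _ in range(max_restarts + 1):
        try:
            r = gym.init_search(decl, "")
            sid, tsid = r["search_id"], r["tactic_state_id"]
            for tac in tactics:
                r = gym.run_tac(sid, tsid, tac)
                if r["error"] is not None:
                    return r              # ordinary tactic failure, no restart
                tsid = r["tactic_state_id"]
            return r                      # last tactic state reached
        except (RuntimeError, OSError):   # REPL exited or pipe broke
            gym = LeanGym()               # start a fresh REPL and retry
    raise RuntimeError("REPL kept crashing")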

When a tactic state with no goals left is reached, custom logic checks that the resulting proof's type matches the top-level goal type and that it does not rely on sorry. We also check for the presence of undefined in the proof term. As an example, the following MiniF2F proofs will safely fail with the error proof_validation_failed.

["init_search", ["mathd_algebra_35", ""]]
["run_tac", ["0", "0", "intros"]]
["run_tac", ["0", "1", "sorry"]]
["init_search", ["induction_divisibility_3divnto3m2n", ""]]
["run_tac", ["0", "0", "intros"]]
["run_tac", ["0", "1", "rw [add_comm]"]]
["run_tac", ["0", "2", "have h3 : 1 * (n + 1) ≤ (n + 1)"]]
["run_tac", ["0", "3", "rw one_mul"]]
["run_tac", ["0", "4", "apply dvd_trans"]]
["run_tac", ["0", "5", "swap"]]
["run_tac", ["0", "6", "simp []"]]
["init_search", ["mathd_numbertheory_13", ""]]
["run_tac", ["0", "0", "intros u v hu hv hsum"]]
["run_tac", ["0", "1", "intro h"]]
["run_tac", ["0", "2", "contrapose h"]]
["run_tac", ["0", "3", "intro hn"]]
["run_tac", ["0", "4", "exact not_lt_of_lt hn undefined"]]