NL. The natural language programming language.

Overview

NL

A Natural-Language programming language. Built using Codex.

A few examples are inside the nl_projects directory.

How it works

Write your code in plain English and have it compiled and run just like regular code would be. The only rules are:

  • Files must have the .nl extension
  • Each command goes on its own line (commands are separated by line breaks)
  • Keep the amount of work per line to a minimum; doing so results in more reliable compilation
  • Comments are written between parentheses
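
Under the hood, compilation hands the English source to Codex and asks it to emit equivalent Python. The repo does not document the exact prompt or model, so the following is only a minimal sketch of that idea, assuming the legacy (pre-1.0) openai Python client and a Codex-style completion model; the function name and prompt wording are hypothetical.

import openai  # assumption: legacy (pre-1.0) openai client

def compile_nl_to_python(nl_source):
    # Hypothetical sketch, not the repo's actual implementation:
    # ask a Codex-style model to translate the English program into Python,
    # echoing each original line as a "# CMD:" comment above its translation.
    prompt = (
        "# Translate the following natural-language program into Python.\n"
        "# Put each original line as a '# CMD:' comment above its code.\n\n"
        + nl_source
        + "\n\n# Python:\n"
    )
    response = openai.Completion.create(
        engine="code-davinci-002",  # assumed Codex model name
        prompt=prompt,
        max_tokens=512,
        temperature=0,
    )
    return response["choices"][0]["text"]

The generated text would then be written to a .py file and executed, which matches the compile-then-run flow shown in the guessing-game example below.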

Example: Guessing game.

Compiling a guessing game program looks something like this:

First you write the code in NL:

(the following is a guessing game)
create a maximum number of 100

Repeat forever...
Store a number between 1 and the maximum number. Call it the answer.
Increase the maximum number by 20
Tell the user that you are thinking of a number between 0 and the maximum number. Tell the user that they only have 14 chances to get it right.
Repeat 14 times...
Ask the user for a guess, and Convert it to a number
if the guess is equal to the answer, congratulate the user and end the loop.
otherwise if the guess is higher or lower than the answer, tell the user.
Tell the user how many chances are left.
when the loop has ended, if the user has not guessed the answer, tell the user game over and then exit the game

When you compile the code, it is copied into a directory and auto-indented (this stage can be skipped if desired, but doing so increases the likelihood of erroneous code):

Create a maximum number of 100

Repeat forever...
    Store a number between 1 and the maximum number. Call it the answer.
    Increase the maximum number by 20.
    Tell the user that you are thinking of a number between 0 and the maximum number. Tell the user that they only have 14 chances to get it right.
    Repeat 14 times...
        Ask the user for a guess, and Convert it to a number
        If the guess is equal to the answer, congratulate the user and end the loop.
        Otherwise if the guess is higher or lower than the answer, tell the user.
        Tell the user how many chances are left.
    When loop has ended, If user has not guessed the answer, tell the user game over and then exit the game.

This then gets compiled to Python code and executed:

import random

# CMD: Create a maximum number of 100
max = 100
# CMD: Repeat forever...
while True:
    # CMD: Store a number between 1 and the maximum number. Call it the answer.
    answer = random.randint(1,max)
    # CMD: Increase the maximum number by 20.
    max = max + 20
    # CMD: Tell the user that you are thinking of a number between 0 and the maximum number. Tell the user that they only have 14 chances to get it right.
    print("I am thinking of a number between 0 and " + str(max))
    # CMD: Repeat 14 times...
    for i in range(0,14):
        # CMD: Ask the user for a guess, and Convert it to a number
        guess = int(input("Guess a number:"))
        # CMD: If the guess is equal to the answer, congratulate the user and end the loop.
        if guess == answer:
            print("You guessed it!")
            break
        # CMD: Otherwise if the guess is higher or lower than the answer, tell the user.
        elif guess > answer:
            print("Your guess is too high")
        else: 
            print("Your guess is too low")
        # CMD: Tell the user how many chances are left.
        print("You have " + str(14 - i) + " chances left")
    # CMD: When loop has ended, If user has not guessed the answer, tell the user game over and then exit the game.
    if guess != answer:
        print("Game over")
        exit()

API: list of commands (in compilation.py)

  • compile(nl_code_path)
  • compileAndRun(nl_code_path)
  • run(python_code_path)
  • compileWithoutCorrection(nl_code_path): compiles without first creating the auto-indented intermediate file.
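
For example, given the guessing-game source above saved as an .nl file, the API could be used as sketched below. This is a hedged example: the import path, the file names, and the output path passed to run() are assumptions based on this README, not verified against the code.

import compilation  # assumption: compilation.py is importable from the repo root

# Compile guessing_game.nl (including the auto-indent stage) and run the result.
compilation.compileAndRun("nl_projects/guessing_game.nl")

# Or perform the steps separately.
compilation.compile("nl_projects/guessing_game.nl")
compilation.run("guessing_game.py")  # assumed path of the generated Python file

# Skip the auto-indent stage (faster, but compilation is less reliable).
compilation.compileWithoutCorrection("nl_projects/guessing_game.nl")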