JGen 📔️

Generate a text-based journal from a template file.

Contents

  • Getting Started
  • Example
  • Overview
  • Usage
  • Details
  • Reserved Keywords
  • Gotchas

Getting Started

  1. Clone this repository -
  • git clone https://github.com/harrison-broadbent/JGen.git
  2. Edit "template.txt", copy and paste an example from /templates, or use the placeholder template -
  • vim template.txt
  3. Run JGen and follow the prompts -
  • python3 JGen.py
  4. Inspect "journal.txt" -
  • vim journal.txt

Example

Given the following template (available as templates/template_weekly.txt) -

_____________________________
Week: WEEKNUM, Year: YY
DD_NAME, DD MM_NAME - +++++++
DD_NAME, DD MM_NAME

Todos:
	-
	-
	-

Plans:
	-
	-
	-

and running JGen for two entries gives us -

_____________________________
Week: 10, Year: 2021
Saturday, 13 March -
Saturday, 20 March

Todos:
	-
	-
	-

Plans:
	-
	-
	-


_____________________________
Week: 11, Year: 2021
Saturday, 20 March -
Saturday, 27 March

Todos:
	-
	-
	-

Plans:
	-
	-
	-

Let's break down what happened (a Python sketch of this loop follows the list) -

  1. JGen sets its internal date - "today's" date, from your perspective.
  2. JGen runs through line 1 and line 2 of template.txt, replacing keywords with their corresponding information and then writing the output to journal.txt.
  3. At the end of line 2 there are seven + (plus) symbols
    • JGen removes these from the output, and increments the internal date counter by 7 days.
  4. JGen fills out line 3 with the new date information, then fills out the rest of the information for the first entry.
  5. It then repeats this for the second entry, carrying over the date from the end of the first entry.
  6. JGen halts, with journal.txt containing our final output.
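
The whole loop above is small enough to sketch in Python. This is a simplified illustration of the behaviour just described, not JGen's actual source - the KEYWORDS table and generate() helper are names invented for the sketch:

    import datetime

    # Keyword -> strftime format. Order matters: DD_NAME must be replaced
    # before DD (and MM_NAME before MM), or the shorter keyword would
    # clobber the longer one.
    KEYWORDS = [
        ("DD_NAME", "%A"), ("MM_NAME", "%B"), ("WEEKNUM", "%W"),
        ("DAYNUM", "%j"), ("DD", "%d"), ("MM", "%m"), ("YY", "%Y"),
    ]

    def generate(template_lines, entries):
        date = datetime.date.today()               # step 1: internal date counter
        output = []
        for _ in range(entries):                   # one pass per entry
            for line in template_lines:
                for keyword, fmt in KEYWORDS:      # step 2: replace keywords
                    line = line.replace(keyword, date.strftime(fmt))
                days = line.count("+")             # step 3: each + adds one day...
                output.append(line.replace("+", ""))   # ...and is stripped from the output
                date += datetime.timedelta(days=days)  # increment applies after the line
        return "".join(output)

    with open("template.txt") as template:
        with open("journal.txt", "w") as journal:
            journal.write(generate(template.readlines(), 2))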

Overview

JGen parses a given template file to generate a journal file.

JGen runs through the template file and replaces keywords with their actual values (dates - day, month, year, etc.) for a specified number of entries.

Usage

The JGen Python script contains all the code for the parser. To get started:

  • Download the JGen script.

  • Create a template.txt file (or download and rename one of the examples in /templates), and place it in the same directory as the JGen Python script.

    • See Details below for more information on creating a template.

    • See the Example section above for a walkthrough of a specific template file.

  • Run the JGen Python script, and input the number of times the template should be reproduced.

    • Ex: 365 entries for a daily journal spanning a year, 52 entries for a weekly journal.

  • journal.txt will be populated with text based on the template and the number of entries specified.
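
A session might look something like this (the exact prompt wording here is illustrative - the script simply asks how many entries to generate):

    $ python3 JGen.py
    Number of entries: 52
    $ vim journal.txt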

Details

See the Example section above if you want to jump straight into seeing how JGen works by walking through an example.

JGen parses the template file, replacing any of the reserved keywords, shown below, with their corresponding date values.

A template uses the + symbol to mark where the internal date counter should be incremented; JGen picks these markers up as it parses the file and strips all + symbols from the output (the Python sketch in the Example section above shows this).

Reserved Keywords

  • DD

    • The date number.
    • 01, 05, 10, 21 etc.
  • MM

    • The month number.
    • 01, 10, 12 etc.
  • YY

    • The year.
    • 2020, 2021 etc.
  • DD_NAME

    • The name of the day.
    • Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday
  • MM_NAME

    • January, February etc.
  • DAYNUM

    • Day number of the year.
    • 123, 340 etc.
  • WEEKNUM

    • Week number of the year.
    • 13, 51 etc.
  • +

    • Used to increment the internal date counter.

    • Will only increment after the entire line has been parsed.

      • For example, parsing
      DD/MM/YY+ - DD/MM/YY
      

      would give

      21/02/2050 - 21/02/2050
      

      and not

      21/02/2050 - 22/02/2050
      

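For reference, each of these keywords corresponds to a standard Python strftime code. The snippet below assumes that mapping (JGen's own implementation may differ) and checks it against the date used in the Example above (Saturday, 13 March 2021):

    >>> import datetime
    >>> d = datetime.date(2021, 3, 13)
    >>> d.strftime("%d"), d.strftime("%m"), d.strftime("%Y")   # DD, MM, YY
    ('13', '03', '2021')
    >>> d.strftime("%A"), d.strftime("%B")                     # DD_NAME, MM_NAME
    ('Saturday', 'March')
    >>> d.strftime("%j"), d.strftime("%W")                     # DAYNUM, WEEKNUM
    ('072', '10')
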
Gotchas

  • + can only be used to increment the date.

    • All + symbols are removed from the output.
    • i.e. the journal.txt file will never contain a + character.
  • As mentioned in the Reserved Keywords section of this readme, the + characters are only interpreted at the end of a line.

    • Currently, to work around this, just place the second date on a new line (like in templates/template_weekly.txt) - see the workaround example below.

    • For example, parsing

      DD/MM/YY+ - DD/MM/YY
      

      would give

      21/02/2050 - 21/02/2050
      

      and not

      21/02/2050 - 22/02/2050
      
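    • The workaround, sketched as a template - each date of the range gets its own line, as in templates/template_weekly.txt. Parsing

      DD/MM/YY+ -
      DD/MM/YY

      would give

      21/02/2050 -
      22/02/2050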