JGen 📔️

Generate a text-based journal from a template file.

Contents

  • Getting Started
  • Example
  • Overview
  • Usage
  • Details
  • Reserved Keywords
  • Gotchas

Getting Started

  1. Clone this repository -
  • git clone https://github.com/harrison-broadbent/JGen.git
  2. Edit "template.txt", copy and paste an example from /templates, or use the placeholder template -
  • vim template.txt
  3. Run JGen and follow the prompts -
  • python3 JGen.py
  4. Inspect "journal.txt" -
  • vim journal.txt

Example

Given the following template (available as templates/template_weekly.txt) -

_____________________________
Week: WEEKNUM, Year: YY
DD_NAME, DD MM_NAME - +++++++
DD_NAME, DD MM_NAME

Todos:
	-
	-
	-

Plans:
	-
	-
	-

and running JGen for two entries gives us -

_____________________________
Week: 10, Year: 2021
Saturday, 13 March -
Saturday, 20 March

Todos:
	-
	-
	-

Plans:
	-
	-
	-


_____________________________
Week: 11, Year: 2021
Saturday, 20 March -
Saturday, 27 March

Todos:
	-
	-
	-

Plans:
	-
	-
	-

Let's break down what happened -

  1. JGen sets its internal date - "today's" date, from your perspective.
  2. JGen runs through line 1 and line 2 of template.txt, replacing keywords with their corresponding information and then writing the output to journal.txt.
  3. At the end of line 2, there are seven + (plus) symbols.
    • JGen removes these from the output, and increments the internal date counter by 7 days.
  4. JGen fills out line 3 with the new date information, then fills out the rest of the information for the first entry.
  5. It then repeats this for the second entry, carrying over the date from the end of the first entry.
  6. JGen halts, with journal.txt containing our final output.
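
To make that concrete, here is a minimal sketch of the loop in Python. The names (KEYWORDS, generate) are hypothetical - the real JGen.py may be organised differently - but the behaviour mirrors the steps above:

from datetime import date, timedelta

# Keyword -> strftime format. Order matters: DD_NAME must be replaced
# before DD (and MM_NAME before MM), since "DD" is a substring of "DD_NAME".
KEYWORDS = [
    ("DD_NAME", "%A"),   # Saturday
    ("MM_NAME", "%B"),   # March
    ("WEEKNUM", "%W"),   # 10
    ("DAYNUM",  "%j"),   # 072 (zero-padded)
    ("DD",      "%d"),   # 13
    ("MM",      "%m"),   # 03
    ("YY",      "%Y"),   # 2021
]

def generate(template_lines, entries, start=None):
    current = start or date.today()
    out = []
    for _ in range(entries):
        for raw in template_lines:
            line = raw
            for keyword, fmt in KEYWORDS:
                line = line.replace(keyword, current.strftime(fmt))
            # Strip the + symbols from the output, then advance the internal
            # date counter - one day per +, only after the whole line is done.
            out.append(line.replace("+", ""))
            current += timedelta(days=line.count("+"))
    return "\n".join(out)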

Overview

JGen parses a given template file to generate a journal file.

JGen runs through the template file, replacing keywords with their actual values (day, month, year, and so on) for a specified number of entries.

Usage

The JGen Python script contains all the code for the parser. To get started:

  • Download the JGen script.

  • Create a template.txt file (or download and rename one of the examples in /templates), and place it in the same directory as the JGen Python script.

    • See Details below for more information on creating a template.

    • See the Example section above to walk through a specific template file.

  • Run the JGen Python script, and input the number of times the template should be reproduced.

    • Ex: 365 entries for a daily journal spanning a year, 52 entries for a weekly journal
  • journal.txt will be populated with text based on the template and the number of entries specified.
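
If you'd rather drive the generation from your own script than type the entry count at the prompt, the hypothetical generate() sketch from the Example section can be reused -

# Assumes the generate() sketch shown earlier in this README.
with open("template.txt") as f:
    template_lines = f.read().splitlines()

with open("journal.txt", "w") as out_file:
    out_file.write(generate(template_lines, entries=52))  # a year of weekly entries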

Details

See the Example section above if you want to jump straight into seeing how JGen works by walking through a specific template.

JGen parses the template file, replacing any of the reserved keywords, shown below, with their corresponding date values.

A template also uses the (+) symbol to indicate when to increment the internal date counter, which JGen picks up as it parses the file. All (+) symbols are stripped from the output.

Reserved Keywords

  • DD

    • The date number.
    • 01, 05, 10, 21 etc.
  • MM

    • The month number.
    • 01, 10, 12 etc.
  • YY

    • The year.
    • 2020, 2021 etc.
  • DD_NAME

    • The name of the day.
    • Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday
  • MM_NAME

    • January, February etc.
  • DAYNUM

    • Day number of the year.
    • 123, 340 etc.
  • WEEKNUM

    • Week number of the year.
    • 13, 51 etc.
  • +

    • Used to increment the internal date counter.

    • Only increments after the entire line has been parsed.

      • For example, parsing

      DD/MM/YY+ - DD/MM/YY
      

      would give

      21/02/2050 - 21/02/2050
      

      and not

      21/02/2050 - 28/02/2050
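
Using the generate() sketch from the Example section (hypothetical names again), you can check this behaviour directly -

from datetime import date

# Both dates render from the same internal date, because the + only
# advances the counter after the line has been written out.
print(generate(["DD/MM/YY+ - DD/MM/YY"], entries=1, start=date(2050, 2, 21)))
# -> 21/02/2050 - 21/02/2050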
      

Gotchas

  • + can only be used to increment the date.

    • All + symbols are removed from the output.
    • i.e. the journal.txt file will never contain a + character.
  • As mentioned in the "Reserved Keywords" section of this readme, the + characters only take effect once the entire line has been parsed.

    • Currently, to work around this, just place the second date on a new line (as in templates/template_weekly.txt).

    • See the worked example under the + keyword above - parsing DD/MM/YY+ - DD/MM/YY gives 21/02/2050 - 21/02/2050, not 21/02/2050 - 28/02/2050.