JGen 📔️

Generate a text-based journal from a template file.

Getting Started

  1. Clone this repository -
  • git clone https://github.com/harrison-broadbent/JGen.git
  2. Edit "template.txt", copy and paste an example from /templates, or use the placeholder template -
  • vim template.txt
  3. Run JGen and follow the prompts (see the example run below) -
  • python3 JGen.py
  4. Inspect "journal.txt" -
  • vim journal.txt
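
For a quick copy-paste run, the whole flow looks roughly like this. The prompt in step 3 asks how many entries to generate; its exact wording comes from JGen.py itself, so treat this transcript as illustrative rather than exact output:

  git clone https://github.com/harrison-broadbent/JGen.git
  cd JGen
  python3 JGen.py     # prompts for the number of entries, e.g. 52
  vim journal.txt     # inspect the generated journal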

Example

Given the following template (available as templates/template_weekly.txt) -

_____________________________
Week: WEEKNUM, Year: YY
DD_NAME, DD MM_NAME - +++++++
DD_NAME, DD MM_NAME

Todos:
	-
	-
	-

Plans:
	-
	-
	-

and running JGen for two entries gives us -

_____________________________
Week: 10, Year: 2021
Saturday, 13 March -
Saturday, 20 March

Todos:
	-
	-
	-

Plans:
	-
	-
	-


_____________________________
Week: 11, Year: 2021
Saturday, 20 March -
Saturday, 27 March

Todos:
	-
	-
	-

Plans:
	-
	-
	-

Let's break down what happened -

  1. JGen sets its internal date - "today's" date, from your perspective.
  2. JGen runs through the first lines of template.txt, replacing keywords with their corresponding information and writing the output to journal.txt.
  3. At the end of the line "DD_NAME, DD MM_NAME - +++++++" there are seven + (plus) symbols.
    • JGen removes these from the output, and increments the internal date counter by 7 days.
  4. JGen fills out the next "DD_NAME, DD MM_NAME" line with the new date information, then fills out the rest of the first entry.
  5. It then repeats this for the second entry, carrying over the date from the end of the first entry.
  6. JGen halts, with journal.txt containing our final output.
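
To make this walk-through concrete, here is a minimal Python sketch of the same loop. It is not the code in JGen.py - just an illustration of the behaviour described in steps 1-6, with assumed function names and strftime formats:

from datetime import date, timedelta

def render_entries(template_lines, start_date, num_entries):
    # Minimal sketch of the behaviour described above (not the real JGen.py).
    current = start_date
    output = []
    for _ in range(num_entries):
        for line in template_lines:
            days_to_add = line.count("+")   # each '+' adds one day
            line = line.replace("+", "")    # '+' never appears in the output
            # Longer keywords first, so DD_NAME is not clobbered by DD, etc.
            line = line.replace("DD_NAME", current.strftime("%A"))
            line = line.replace("MM_NAME", current.strftime("%B"))
            line = line.replace("WEEKNUM", current.strftime("%W"))
            line = line.replace("DAYNUM", current.strftime("%j"))
            line = line.replace("DD", current.strftime("%d"))
            line = line.replace("MM", current.strftime("%m"))
            line = line.replace("YY", current.strftime("%Y"))
            output.append(line)
            # The date only moves forward once the whole line has been written.
            current += timedelta(days=days_to_add)
    return "\n".join(output)

Feeding the weekly template above through this sketch with start_date = date(2021, 3, 13) and num_entries = 2 produces output in the same shape as the example journal.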

Overview

JGen parses a given template file to generate a journal file.

JGen runs through the template file and replaces keywords with their actual values (dates - day/month/year etc.), for a specified number of entries.

Usage

The JGen Python script contains all the code for the parser. To get started:

  • Download the JGen script.

  • Create a template.txt file (or download and rename one of the examples in /templates), and place it in the same directory as the JGen Python script.

    • See Details below for more information on creating a template.

    • See the Example section above to walk through a specific template file.

  • Run the JGen Python script, and input the number of times the template should be reproduced.

    • Ex: 365 entries for a daily journal spanning a year, 52 entries for a weekly journal
  • journal.txt will be populated with text based on the template and the number of entries specified.

Details

See the Example section above if you want to jump straight into seeing how JGen works by walking through a concrete example.

JGen parses the template file, replacing any of the reserved keywords, shown below, with their corresponding date values.

A (+) symbol in the template marks where the internal date counter should be incremented; JGen picks these up as it parses the file and strips every (+) symbol from the output.

Reserved Keywords

  • DD

    • The date number.
    • 01, 05, 10, 21 etc.
  • MM

    • The month number.
    • 01, 10, 12 etc.
  • YY

    • The year.
    • 2020, 2021 etc.
  • DD_NAME

    • The name of the day.
    • Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday
  • MM_NAME

    • January, February etc.
  • DAYNUM

    • Day number of the year.
    • 123, 340 etc.
  • WEEKNUM

    • Week number of the year.
    • 13, 51 etc.
  • +

    • Used to increment the internal date counter.

    • Only increments after the entire line has been parsed.

      • for example, parsing
      DD/MM/YY+ - DD/MM/YY
      

      would give

      21/02/2050 - 21/02/2050
      

      and not

      21/02/2050 - 28/02/2050
      

Gotchas

  • + can only be used to increment the date.

    • All + symbols are removed from the output.
    • i.e. the journal.txt file will never contain a + character.
  • As mentioned in the "Reserved Keywords" section of this readme, + characters only take effect once the entire line has been parsed.

    • Currently, to work around this, place the second date on a new line (as in templates/template_weekly.txt).

    • For example, parsing

      DD/MM/YY+ - DD/MM/YY
      

      would give

      21/02/2050 - 21/02/2050
      

      and not

      21/02/2050 - 28/02/2050
      