Overview

talk-preview-img-builder

A tool that helps build talk preview images by combining a given background image with a talk event description.

Installation and Usage

Install Dependencies

To run the app, install the dependencies with the following command:

pipenv install -d
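
If Pipenv is not installed yet, it can usually be installed with pip first:

pip install pipenv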

Run the Application

Before running the application, you need to prepare the materials for building the talk preview images/slides. Two materials are required (a quick check that both files are in place is sketched after the list):

  • A background image named background.png, located in the materials/img folder.

  • A talk event description file named speeches.json, located in the materials/ folder.
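
The following minimal sketch checks that both files are in place; the paths match the defaults listed in the configuration table below:

from pathlib import Path

# Default material locations; adjust if you override them in config.py or a .env file
for path in (Path("materials/img/background.png"), Path("materials/speeches.json")):
    print(f"{path}: {'found' if path.exists() else 'MISSING'}")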

After preparing the materials, you can run the application with one of the following commands:

pipenv run build_talk_preview_img   # build the talk preview images

or

pipenv run build_talk_preview_ppt  # build the talk preview slides

The generated talk preview images and slides are located in the export/ folder.
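
Under the hood, the builder composites the talk information as text onto the background image. The snippet below is a minimal, hypothetical sketch of that kind of compositing with Pillow; the title text, position, color, and font are illustrative values taken from the defaults in the configuration table below, not the tool's actual implementation:

from pathlib import Path
from PIL import Image, ImageDraw, ImageFont

background = Image.open("materials/img/background.png").convert("RGB")
draw = ImageDraw.Draw(background)
# "PingFang.ttc" is the default image font; Pillow may need a full path to locate it
font = ImageFont.truetype("PingFang.ttc", size=48)

# Draw a talk title at the default title position, in the default text color
draw.text((110, 110), "Example Talk Title", font=font, fill="#080A42")

Path("export").mkdir(exist_ok=True)
background.save("export/example_preview.png")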

Configuring the Application

There are several options for configuring the application; the default values are shown in the config.py file. You can override the defaults by editing config.py or by adding a .env file that sets these variables before running the app (see the example after the table).

Variable | Description | Default Value (Image / Slides) | Type (Image / Slides)
BACKGROUND_IMG_PATH | Path to the background image | materials/img/background.png | String
SPEECHES_PATH | Path to the speech description file | materials/speeches.json | String
PREVIEW_IMG_WIDTH | Width of the generated preview image | 700px / 30cm | Integer / Float
PREVIEW_IMG_HEIGHT | Height of the generated preview image | 700px / 30cm | Integer / Float
PREVIEW_IMG_TITLE_UPPER_LEFT_X | X coordinate of the title's upper-left corner | 110px / 0.95cm | Integer / Float
PREVIEW_IMG_TITLE_UPPER_LEFT_Y | Y coordinate of the title's upper-left corner | 110px / 1.04cm | Integer / Float
PREVIEW_IMG_CONTENT_UPPER_LEFT_X | X coordinate of the content's upper-left corner | 85px / 1.38cm | Integer / Float
PREVIEW_IMG_CONTENT_UPPER_LEFT_Y | Y coordinate of the content's upper-left corner | 200px / 3.8cm | Integer / Float
PREVIEW_IMG_FOOTER_UPPER_LEFT_X | X coordinate of the footer's upper-left corner | 100px / 1.6cm | Integer / Float
PREVIEW_IMG_FOOTER_UPPER_LEFT_Y | Y coordinate of the footer's upper-left corner | 650px / 12.2cm | Integer / Float
PREVIEW_IMG_SPEAKER_UPPER_RIGHT_X | X coordinate of the speaker name's upper-right corner | 600px / 13.5cm | Integer / Float
PREVIEW_IMG_SPEAKER_UPPER_RIGHT_Y | Y coordinate of the speaker name's upper-right corner | 570px / 10cm | Integer / Float
TITLE_HEIGHT | Height of the title block | 70px / 1.84cm | Integer / Float
CONTENT_HEIGHT | Height of the content block | 90px / 7.5cm | Integer / Float
PREVIEW_TEXT_COLOR | Color of the text used in the preview image | #080A42 | String
PREVIEW_HIGHTLIGHT_TEXT_COLOR | Highlight color of the text used in the preview image | #EBCC73 | String
PREVIEW_TEXT_FONT | Font used in the preview image | "PingFang.ttc" / "Taipei Sans TC Beta" | String
PREVIEW_TEXT_BOLD_FONT | Bold font used in the preview image | "PingFang.ttc" / "Taipei Sans TC Beta" | String
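
For example, assuming the app reads standard KEY=VALUE entries from the .env file, an override might look like the following (the values here are arbitrary illustrations):

BACKGROUND_IMG_PATH=materials/img/my_background.png
PREVIEW_TEXT_COLOR=#FFFFFF
PREVIEW_HIGHTLIGHT_TEXT_COLOR=#EBCC73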

Coding Style

The application follows the PEP 8 coding style. You can check it with the following command:

pipenv run lint

and reformat the code with the following command, which leverages black and isort:

pipenv run reformat
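
Running the two tools directly should be roughly equivalent, assuming they are installed as dev dependencies:

pipenv run black .
pipenv run isort .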

TODO

  • Automatically generate the talk preview metadata file (e.g. speeches.json) from the PyConTW API server.
  • Implement text wrapping with mixed-language support in the title and content of the talk preview image.
  • Implement dynamic font size adjustment in the title and content of the talk preview image, depending on the text length.
  • Implement a CI workflow using GitHub Actions.
Owner: PyCon Taiwan