A simple Flask application to collect annotations for the Turing Change Point Dataset, a benchmark dataset for change point detection algorithms.

AnnotateChange

Overview

Welcome to the repository of the "AnnotateChange" application. This application was created to collect annotations of time series data in order to construct the Turing Change Point Dataset (TCPD). The TCPD is a dataset of real-world time series used to evaluate change point detection algorithms. For the change point detection benchmark that was created using this dataset, see the Turing Change Point Detection Benchmark repository.

Any work that uses this repository should cite our paper: Van den Burg & Williams - An Evaluation of Change Point Detection Algorithms (2020). You can use the following BibTeX entry:

@article{vandenburg2020evaluation,
        title={An Evaluation of Change Point Detection Algorithms},
        author={{Van den Burg}, G. J. J. and Williams, C. K. I.},
        journal={arXiv preprint arXiv:2003.06222},
        year={2020}
}

Here's a screenshot of what the application looks like during the annotation process:

[screenshot of AnnotateChange during annotation]

Some of the features of AnnotateChange include:

  • Admin panel to add/remove datasets, add/remove annotation tasks, add/remove users, and inspect incoming annotations.

  • Basic user management: authentication, email confirmation, forgotten password, automatic log out after inactivity, etc. Users are only allowed to register using an email address from an approved domain.

  • Task assignment of time series to users is done on the fly, ensuring that no user ever annotates the same dataset twice and prioritising datasets that are close to a desired number of annotations (see the sketch after this list).

  • Interactive time series graph with pan and zoom, including support for multidimensional time series.

  • Mandatory "demo" to onboard the user to change point annotation.

  • Backup of annotations to the admin via email.

  • Time series datasets are verified upon upload according to a strict schema.
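
The on-the-fly task assignment mentioned above could be approximated by logic along the following lines. This is a minimal illustrative sketch in plain Python, not the actual implementation (which lives in utils/tasks.py); the function name and the target number of annotations are assumptions.

    import random

    MAX_PER_DATASET = 10  # hypothetical desired number of annotations per dataset

    def pick_dataset_for_user(user_annotated_ids, annotation_counts):
        """Pick a dataset id for a user, or None if nothing is left.

        user_annotated_ids: set of dataset ids the user has already annotated.
        annotation_counts: dict mapping dataset id -> number of annotations so far.
        """
        candidates = [
            (dataset_id, count)
            for dataset_id, count in annotation_counts.items()
            if dataset_id not in user_annotated_ids and count < MAX_PER_DATASET
        ]
        if not candidates:
            return None
        # Prefer datasets that are closest to the desired number of annotations.
        best = max(count for _, count in candidates)
        closest = [dataset_id for dataset_id, count in candidates if count == best]
        return random.choice(closest)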

Getting Started

Below are instructions for setting up the application for local development and for running the application with Docker.

Basic

AnnotateChange can be launched quickly for local development as follows:

  1. Clone the repo

    $ git clone https://github.com/alan-turing-institute/AnnotateChange
    $ cd AnnotateChange
    
  2. Set up a virtual environment and install dependencies (requires Python 3.7+)

    $ sudo apt-get install -y python3-venv # assuming Ubuntu
    $ python3 -m venv ./venv
    $ source ./venv/bin/activate
    $ pip install wheel
    $ pip install -r requirements.txt
    
  3. Create local development environment file

    $ cp .env.example .env.development
    $ sed -i 's/DB_TYPE=mysql/DB_TYPE=sqlite3/g' .env.development
    

    With DB_TYPE=sqlite3, we don't have to deal with MySQL locally.

  4. Initialize the database (this will create a local app.db file).

    $ ./flask.sh db upgrade
    
  5. Create the admin user account

    $ ./flask.sh admin add --auto-confirm-email
    

    The --auto-confirm-email flag automatically marks the email address of the admin user as confirmed. This is mostly useful in development environments when you don't have a mail address set up yet.

  6. Run the application

    $ ./flask.sh run
    

    This should tell you where it's running, most likely localhost:5000. You should be able to log in with the admin account you've just created.

  7. As admin, upload ALL demo datasets (included in demo_data) through: Admin Panel -> Add dataset. You should then be able to follow the introduction to the app (available from the landing page).

  8. After completing the introduction, you will be able to access the user interface ("Home") and annotate your own time series.

Docker

To use AnnotateChange locally using Docker, follow the steps below. For a full-fledged installation on a server, see the deployment instructions.

  1. Install docker and docker-compose.

  2. Clone this repository and switch to it:

    $ git clone https://github.com/alan-turing-institute/AnnotateChange
    $ cd AnnotateChange
    
  3. Build the docker image:

    $ docker build -t gjjvdburg/annotatechange .
    
  4. Create the directory for persistent MySQL database storage:

    $ mkdir -p persist/{instance,mysql}
    $ sudo chown :1024 persist/instance
    $ chmod 775 persist/instance
    $ chmod g+s persist/instance
    
  5. Copy the environment variables file:

    $ cp .env.example .env
    

    Some environment variables can be adjusted if needed. For example, when moving to production you'll need to change the FLASK_ENV variable accordingly. Make sure to set a proper SECRET_KEY and AC_MYSQL_PASSWORD (= MYSQL_PASSWORD). You'll also need to configure a mail account so the application can send out emails for registration and the like; this is what the variables prefixed with MAIL_ are for. The ADMIN_EMAIL is likely your own email address; it is used when the app encounters an error and to send backups of the annotation records. You can limit the email domains users are allowed to register with via the USER_EMAIL_DOMAINS variable. See the config.py file for more information on the configuration options, and the illustrative example after this list.

  6. Create a local docker network for communication between the AnnotateChange app and the MySQL server:

    $ docker network create web
    
  7. Launch the services with docker-compose:

    $ docker-compose up
    

    You may need to wait about 2 minutes here while the database is initialized. If all goes well, you should be able to point your browser to localhost:7831 and see the landing page of the application. Stop the service (by pressing Ctrl+C) before continuing to the next step.

  8. Once you have the app running, you'll want to create an admin account so you can upload datasets, manage tasks and users, and download annotation results. This can be done using the following command:

    $ docker-compose run --entrypoint 'flask admin add --auto-confirm-email' annotatechange
    
  9. As admin, upload ALL demo datasets (included in demo_data) through: Admin Panel -> Add dataset. You should then be able to follow the introduction to the app (available from the landing page).

  10. After completing the introduction, you will be able to access the user interface ("Home") and annotate your own time series.
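
As a rough illustration of the variables discussed in step 5, a development-oriented environment file might contain entries like the following. The variable names are those mentioned above and in .env.example; the values are placeholders, and .env.example and config.py remain the authoritative reference.

    FLASK_ENV=development
    SECRET_KEY=replace-with-a-long-random-string
    DB_TYPE=mysql
    AC_MYSQL_PASSWORD=replace-with-a-strong-password
    MYSQL_PASSWORD=replace-with-a-strong-password
    ADMIN_EMAIL=you@example.com
    USER_EMAIL_DOMAINS=example.com
    # MAIL_* settings for the outgoing mail account: see .env.example for the exact names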

Notes

This codebase is provided "as is". If you find any problems, please raise an issue on GitHub.

The code is licensed under the MIT License.

This code was written by Gertjan van den Burg with helpful comments provided by Chris Williams.

Some implementation details

Below are some thoughts that may help make sense of the codebase.

  • AnnotateChange is a web application built on the Flask framework. See this excellent tutorial for an introduction to Flask. The flask.sh shell script loads the appropriate environment variables and runs the application.

  • The application handles user management and is centered around the idea of a "task" which links a particular user to a particular time series to annotate.

  • An admin role is available, and the admin user can manually assign and delete tasks as well as add or delete users, datasets, etc. The admin user is created using the CLI (see the Getting Started documentation above).

  • All datasets must adhere to a specific dataset schema (see utils/dataset_schema.json). See the files in demo_data for examples, as well as those in TCPD. A validation sketch is included after this list.

  • Annotations are stored in the database using 0-based indexing. Tasks are assigned on the fly when a user requests a time series to annotate (see utils/tasks.py).

  • Users can only begin annotating when they have successfully passed the introduction.

  • Configuration of the app is done through environment variables, see the .env.example file for an example.

  • Docker is used for deployment (see the deployment documentation in docs), and Traefik is used for SSL, etc.

  • The time series graph is plotted using d3.js.
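
To check a candidate dataset against the schema before uploading it, a short script along the following lines could be used. This is a sketch rather than part of the application; it assumes the jsonschema package is installed, and the dataset file name is only an example.

    import json

    import jsonschema

    # Load the schema shipped with the repository.
    with open("utils/dataset_schema.json") as fp:
        schema = json.load(fp)

    # Load the dataset to check; replace the file name with your own dataset.
    with open("my_dataset.json") as fp:
        dataset = json.load(fp)

    # Raises jsonschema.ValidationError if the dataset does not conform.
    jsonschema.validate(instance=dataset, schema=schema)
    print("Dataset conforms to the schema.")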
