Overview

Badges: Build Status | codecov | Documentation Status | DOI

AstronomicAL

An interactive dashboard for visualisation, integration and classification of data using Active Learning.

AstronomicAL is a human-in-the-loop interactive labelling and training dashboard that allows users to create reliable datasets and robust classifiers using active learning. The system enables users to visualise and integrate data from different sources and to deal with incorrect or missing labels and imbalanced class sizes, using active learning to help the user focus on correcting the labels of a few key examples. By combining the Panel, Bokeh, modAL and scikit-learn packages, AstronomicAL enables researchers to take full advantage of the benefits of active learning: high-accuracy models using just a fraction of the total data, without requiring users to be well versed in the underlying libraries.
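As a rough illustration of the workflow these libraries provide, the sketch below runs a plain modAL and scikit-learn active learning loop on synthetic data. It is illustrative only: AstronomicAL wraps this kind of loop behind its dashboard, and the label supplied at each step comes from the human expert rather than from a held-out array.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from modAL.models import ActiveLearner
    from modAL.uncertainty import uncertainty_sampling

    # Synthetic stand-in for a large, mostly unlabelled tabular dataset.
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

    # Begin with only a handful of labelled examples.
    rng = np.random.default_rng(0)
    initial_idx = rng.choice(len(X), size=10, replace=False)
    learner = ActiveLearner(
        estimator=RandomForestClassifier(random_state=0),
        query_strategy=uncertainty_sampling,
        X_training=X[initial_idx],
        y_training=y[initial_idx],
    )

    X_pool = np.delete(X, initial_idx, axis=0)
    y_pool = np.delete(y, initial_idx, axis=0)

    for _ in range(20):
        query_idx, _ = learner.query(X_pool)
        # In AstronomicAL this label would be assigned by the expert via the dashboard.
        learner.teach(X_pool[query_idx], y_pool[query_idx])
        X_pool = np.delete(X_pool, query_idx, axis=0)
        y_pool = np.delete(y_pool, query_idx, axis=0)

    print("Accuracy after 30 labels:", learner.score(X, y))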

[Demo: Load Configuration]

Statement of Need

Active learning (Settles, 2012) removes the requirement for large amounts of labelled training data whilst still producing high-accuracy models. This is extremely important: with ever-growing datasets, it is becoming impossible to manually inspect and verify the ground truth used to train machine learning systems. The reliability of the training data limits the performance of any supervised learning model, so consistent classifications become more problematic as data sizes increase. The problem is exacerbated when a dataset does not contain any labelled data, preventing supervised learning techniques entirely. AstronomicAL has been developed to tackle these issues head-on and provide a solution for any large scientific dataset.

It is common for active learning to query areas of high uncertainty; these are often the boundaries between classes, where the expert's knowledge is required. To facilitate this human-in-the-loop process, AstronomicAL provides users with the functionality to fully explore each data point chosen. This allows them to inject their domain expertise directly into the training process, ensuring that assigned labels are both accurate and reliable.

AstronomicAL has been extensively validated on astronomy datasets. These are highly representative of the issues that we anticipate will be found in other domains for which the tool is designed to be easily customisable. Such issues include the volume of data (millions of sources per survey), vastly imbalanced classes and ambiguous class definitions leading to inconsistent labelling. AstronomicAL has been developed to be sufficiently general for any tabular data and can be customised for any domain. For example, we provide the functionality for data fusion of catalogued data and online cutout services for astronomical datasets.

Using its modular and extensible design, researchers can quickly adapt AstronomicAL for their research to allow for domain-specific plots, novel query strategies, and improved models. Furthermore, there is no requirement to be well-versed in the underlying libraries that the software uses. This is due to large parts of the complexity being abstracted whilst allowing more experienced users to access full customisability.
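As an example of the kind of extension this design allows, a custom query strategy can be written as a small standalone function in the style modAL expects and swapped into the training loop. The function below is a hypothetical margin-based strategy written for illustration, not part of AstronomicAL's API:

    import numpy as np

    def margin_query(classifier, X_pool, n_instances=1):
        # Hypothetical modAL-style query strategy: pick the pool points whose top two
        # class probabilities are closest together, i.e. the most ambiguous points.
        probs = classifier.predict_proba(X_pool)
        part = np.partition(-probs, 1, axis=1)
        margin = -part[:, 0] + part[:, 1]  # p_top1 - p_top2 for each point
        query_idx = np.argsort(margin)[:n_instances]
        return query_idx, X_pool[query_idx]

Passed as the query_strategy argument of a modAL ActiveLearner, a function like this replaces the built-in uncertainty sampling without any change to the rest of the loop.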

As the software runs entirely locally on the user's system, AstronomicAL provides a private space to experiment whilst providing a public mechanism to share results. By sharing only the configuration file, users remain in charge of distributing their potentially sensitive data, enabling collaboration whilst respecting privacy.

Documentation

The documentation for AstronomicAL can be found here.

Installation

To install AstronomicAL and its dependencies, clone the repository and, from within the repo folder, run pip install -r requirements.txt (or create a Conda environment from the same file, as shown below). It is recommended that the user creates a virtual environment using tools such as Virtualenv or Conda to prevent conflicting package versions.

    git clone https://github.com/grant-m-s/AstronomicAL.git
    cd AstronomicAL
    conda config --add channels conda-forge
    conda create --name astronomical --file requirements.txt
    conda activate astronomical

Quickstart Instructions

To begin using the software, run bokeh serve astronomicAL --show and your browser should automatically open to localhost:5006/astronomicAL.

AstronomicAL provides both an example dataset and an example configuration file to allow you to jump right into the software and give it a test run.

Load Configuration

To begin training, simply select the Load Custom Configuration checkbox and choose your config file. Here we have chosen the example_config.json file.

The Load Config Select option allows you to choose the extent to which the configuration is reloaded.

Contributing to AstronomicAL

Reporting Bugs

If you encounter a bug, you can directly report it in the issues section.

Please describe how to reproduce the bug and include as much information as possible that can be helpful for fixing it.

Are you able to fix a bug?

You can open a new pull request or include your suggested fix in the issue.

Submission of extensions

Have you created an extension that you want to share with the community?

Create a pull request describing your extension and how it can improve research for others.

Support and Feedback

We would love to hear your thoughts on AstronomicAL.

Are there any features that would improve the effectiveness and usability of AstronomicAL? Let us know!

Any feedback can be submitted as an issue.

Referencing the Package

Please remember to cite our software and user guide whenever relevant.

See the Citing page in the documentation for instructions about referencing and citing the astronomicAL software.

Comments
  • Parameter errors when attempting to set up a training set


    Hi there,

I have been attempting to create a training set of color-magnitude diagrams with astronomicAL, and after a few iterations of importing data as both .csv and .fits files I keep running into the following errors. I was wondering whether these are specifically related to how I am using astronomicAL. I have created a column with integer labels 0-5 corresponding to galaxy shapes and have been using these as my Label and ID columns respectively. Any assistance or guidance would be appreciated, and thank you for your help!

    Best, Max

Error when launching astronomicAL (due to a renamed class in newer versions of pandas?): pandas could not register all extension types; imports failed with the following error: cannot import name 'ABCIndexClass' from 'pandas.core.dtypes.generic'

    Errors when I attempt to view a basic plot after setting my parameters raise ValueError("%s not in parameter%s's list of possible objects, " ValueError: absMag_u_tot-absMag_g_tot not in parameter Y_variable's list of possible objects, valid options include [id, ebv, r_aper, absMag_u_tot, err_u, absMag_g_tot, err_g, absMag_r_tot, err_r, absMag_i_tot, err_i, absMag_z_tot, err_z, isTyphon, col1_1, col2_1, col3_1, col4_1, Official_name, Old_name, VCC_name, VCC_membership, VCC_Bmag, TH_name, sep_2, ...]

    /Users/mkurzner/opt/anaconda3/lib/python3.9/site-packages/bokeh/server/protocol_handler.py:94: RuntimeWarning: coroutine 'WSHandler.send_message' was never awaited work = connection.error(message, repr(e))

    opened by mkurzner 9
  • [JOSS Review] Tests


    This issue is part of the JOSS review going on in openjournals/joss-reviews#3635.

    The tests/ directory contains 122 tests, but lacks organization. Suggested changes are:

    1. Split the test file into several individual files such that each sub-module has its own test file with just a few tests. This change is optional since JOSS doesn't require a specific structure to the test suite, but it may help users.
    2. When I ran the tests, 32 failed. I suggest looking into the failures or skipping them at runtime with a provided reason. The test summary from my run is attached below. To verify that all the functionalities of the software are working properly, I would like to get to the bottom of why the tests don't pass.
    FAILED all_test.py::TestSettings::test_data_selection_check_config_load_level_no_load_config - FileNotFoundError: [Errno 2]...
    FAILED all_test.py::TestDashboards::test_dashboard_closing_button_check_contents[Settings] - ValueError: PNG pane cannot pa...
    FAILED all_test.py::TestDashboards::test_selected_source_init_selected - _pickle.PicklingError: Can't pickle <function Colu...
    FAILED all_test.py::TestDashboards::test_selected_source_check_history_from_none_to_selected - _pickle.PicklingError: Can't...
    FAILED all_test.py::TestDashboards::test_selected_source_check_history_from_selected_to_selected_unique - _pickle.PicklingE...
    FAILED all_test.py::TestDashboards::test_selected_source_check_history_from_selected_to_selected_same_id_head - _pickle.Pic...
    FAILED all_test.py::TestDashboards::test_selected_source_check_history_from_selected_to_selected_no_id - _pickle.PicklingEr...
    FAILED all_test.py::TestDashboards::test_selected_source_check_history_from_selected_to_selected_same_id_throughout - _pick...
    FAILED all_test.py::TestDashboards::test_selected_source_empty_selected_from_selected - _pickle.PicklingError: Can't pickle...
    FAILED all_test.py::TestDashboards::test_selected_source_check_valid_select_from_selected_is_valid - _pickle.PicklingError:...
    FAILED all_test.py::TestDashboards::test_selected_source_search_valid_id - _pickle.PicklingError: Can't pickle <function Co...
    FAILED all_test.py::TestDashboards::test_selected_source_search_deselect - _pickle.PicklingError: Can't pickle <function Co...
    FAILED all_test.py::TestDashboards::test_selected_source_update_default_images - _pickle.PicklingError: Can't pickle <funct...
    FAILED all_test.py::TestDashboards::test_selected_source_update_custom_images - _pickle.PicklingError: Can't pickle <functi...
    FAILED all_test.py::TestDashboards::test_settings_dashboard_init - ValueError: PNG pane cannot parse string that is not a f...
    FAILED all_test.py::TestDashboards::test_settings_dashboard_close_settings - ValueError: PNG pane cannot parse string that ...
    FAILED all_test.py::TestDashboards::test_settings_dashboard_pipeline_previous - ValueError: PNG pane cannot parse string th...
    FAILED all_test.py::TestDashboards::test_settings_dashboard_pipeline_next - ValueError: PNG pane cannot parse string that i...
    FAILED all_test.py::TestDashboards::test_settings_dashboard_close_button_check_disabled - ValueError: PNG pane cannot parse...
    FAILED all_test.py::TestDashboards::test_settings_dashboard_get_settings - ValueError: PNG pane cannot parse string that is...
    FAILED all_test.py::TestDashboards::test_AL_dashboard_labels_to_train_single - ValueError: No plotting extension is current...
    FAILED all_test.py::TestDashboards::test_AL_dashboard_labels_to_train_multiple - ValueError: No plotting extension is curre...
    FAILED all_test.py::TestDashboards::test_labelling_dashboard_init - ValueError: No plotting extension is currently loaded. ...
    FAILED all_test.py::TestDashboards::test_labelling_dashboard_check_single_matching_value_in_region - ValueError: No plottin...
    FAILED all_test.py::TestDashboards::test_labelling_dashboard_check_multiple_matching_values_in_region - ValueError: No plot...
    FAILED all_test.py::TestDashboards::test_labelling_dashboard_check_combined_matching_values_in_region - ValueError: No plot...
    FAILED all_test.py::TestDashboards::test_labelling_dashboard_removing_sample_criteria - ValueError: No plotting extension i...
    FAILED all_test.py::TestDashboards::test_labelling_dashboard_check_looping_through_labelled - ValueError: No plotting exten...
    FAILED all_test.py::TestDashboards::test_labelling_dashboard_check_new_button_with_single_match - ValueError: No plotting e...
    FAILED all_test.py::TestDashboards::test_labelling_dashboard_check_zero_matching - ValueError: No plotting extension is cur...
    FAILED all_test.py::TestDashboards::test_labelling_dashboard_save_assigned_label - ValueError: No plotting extension is cur...
    FAILED all_test.py::TestUtils::test_save_config_save_config_file - ValueError: PNG pane cannot parse string that is not a f...
    ======================================== 32 failed, 90 passed, 19 warnings in 16.04s =========================================
    
    opened by rmorgan10 9
  • [JOSS Review] Installation


    This issue is part of the JOSS review going on in https://github.com/openjournals/joss-reviews/issues/3635.

    At present, the installation instructions are inaccurate. After cloning, users need to enter the command cd astronomicAL such that the environment creation command can access the requirements.txt file. Adding this command to the instructions will be sufficient to satisfy the JOSS installation specifications.

    That being said, I think the better way to go about installation would be to make your package available on PyPI. The motivation here is to enable users to be able to use astronomicAL in any directory they are working in as opposed to having to move their work to the location of the cloned repository. This change is optional, but could make astronomicAL much more portable.

    opened by rmorgan10 8
  • [JOSS Review] Paper


    This issue is part of the JOSS review going on in https://github.com/openjournals/joss-reviews/issues/3635.

    The paper is well written and does an excellent job of introducing astronomicAL. I think two aspects of the paper can be improved.

    1. The state of the field discussion seems to be missing. Perhaps some examples where active learning has been applied would be helpful here.

2. The statement of need section seems to list the features of astronomicAL as opposed to demonstrating why it is a needed piece of software. I'm aware that there is significant need for a tool like this, but I think that fact could be communicated better in the paper. For instance, when you add a more descriptive state of the field, that discussion will lend itself nicely to introducing the reason the field needs astronomicAL.

    opened by rmorgan10 5
  • How to use this for Image Labelling?


    Hi guys!

I was following the README.md file but couldn't quite figure out whether this could be used for labelling images and subsequently building image classification models.

    Any help would be appreciated. Thanks! :)

    Regards, Vinayak.

    enhancement question 
    opened by ElisonSherton 2
• [JOSS Review] General Documentation


    This issue is part of the JOSS review going on in openjournals/joss-reviews#3635.

    The methods are sufficiently documented given that users interact with astronomicAL through the interactive browser as opposed to directly with the API. The tutorials are very helpful in this regard.

    Just one thing is missing to satisfy JOSS requirements:

    1. Community guidelines for how people can contribute, report issues, and seek support. Generally, just a short paragraph in the README will satisfy this requirement.
    opened by rmorgan10 2
  • Documentation & Fixes


The majority of the documentation tutorials have now been updated.

    Small fixes:

    • ensure ml data is shared across all classifiers
    • reassign index after applying scaling
    opened by grant-m-s 1
  • Code Changes From User Feedback


1. Added SED plots for astronomy classification

    • Users can load in a photometry band file containing mean wavelength, full width at half maximum (FWHM) and magnitude error values. These can be assigned specific values or column names from which the corresponding value will be retrieved when plotted.
    • Users can create these files through AstronomicAL and assign which features should be used for the SED plot. By default all features are included in the file but are assigned -99 (indicating they should not be plotted).
    • data/sed_data/example_photometry_bands.json added to work with the example dataset and config.

2. When exporting a configuration, the classifiers will export the raw labels from the user rather than one-vs-rest converted ones (see the sketch after this list).

    • The conversion back to one-vs-rest happens on loading the configuration file.
    • The models themselves still see one-vs-rest labels (1 if assigned class, 0 otherwise).

3. Removed unknowns (label = -1) from performance metrics and initial starting point selection.

    • The original logic set all -1 labels to 0 for performance metrics and starting points for each classifier, but due to (2) this prevented the conversion back to the original values, as it was not possible to know whether a point was in the initial starting points or not.

4. Show/hide incorrect and correct points in train and validation performance plots.

5. Export to Fits file button added.

    • Allows the labels assigned in all classifiers, as well as in labelling mode, to be saved as a Fits file containing the id and assigned label of each point.

6. Removed many print statements.

7. Various bug fixes.
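The raw versus one-vs-rest label handling described in point 2 amounts to the following (a hypothetical NumPy illustration, not AstronomicAL's actual code):

    import numpy as np

    # Raw multi-class labels as assigned by the user; these are what the exported
    # configuration now stores.
    raw_labels = np.array([2, 0, 1, 2, 1])

    # Each binary classifier targets a single class and only ever sees
    # one-vs-rest labels: 1 if assigned class, 0 otherwise.
    assigned_class = 2
    ovr_labels = (raw_labels == assigned_class).astype(int)
    print(ovr_labels)  # [1 0 0 1 0]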

    opened by grant-m-s 1
  • Images as separate process


Using multiprocessing.Process, images are now loaded in a separate process to prevent lag on the rest of the system.

Tests have also been updated to allow for this change.
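The general pattern is sketched below using plain multiprocessing; the file path is a hypothetical placeholder and this is not AstronomicAL's actual implementation:

    import multiprocessing as mp

    def load_image(path, queue):
        # Stand-in for the slow step: downloading or reading an optical/radio cutout.
        with open(path, "rb") as f:
            queue.put(f.read())

    if __name__ == "__main__":
        queue = mp.Queue()
        proc = mp.Process(target=load_image, args=("cutout.png", queue))  # hypothetical file
        proc.start()
        # The main (dashboard) process stays responsive while the child does the I/O.
        image_bytes = queue.get()  # blocks only once the image is actually needed
        proc.join()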

    opened by grant-m-s 1
  • Interactive labelling


Split into two separate modes: Active Learning Mode (the same as the original) and Labelling Mode (used to create hand-labelled test sets).

    Also includes:

    • View test set results
    • Added default dataset and config file
    • Updated documentation
    • Updated testing
    opened by grant-m-s 1
  • Config collaboration


    Users can now load configuration files for layouts, settings or full training reproducibility.

Other changes include:

    • Autosaving of training labels to allow for easy restart of a project.
    • Unnecessary imports removed.
    • Increased tests in settings.
    • Improved default layout.
    • Added checks for optical and radio images to prevent crashes due to website outages.
    opened by grant-m-s 1
  • AstronomicAL V2: Deep Learning and Image Update


    Upcoming features:

    Machine Learning Updates

    • [ ] Full Pytorch Model Support

      • [ ] Training
      • [ ] Save/Load Model
    • [ ] Full Tensorflow Model Support

      • [ ] Training
      • [ ] Save/Load Model
    • [ ] Image Dataset Incorporation

    • [ ] Model Exploration Views

      • [ ] Saliency Maps
      • [ ] Weight Visualisation
      • [ ] Occlusion Maps
    • [ ] Incorporate Stopping Criteria into training

    • [ ] Allow cross-val to produce uncertainties during training

    UI/UX Improvements

    • [ ] Improve layout template
    • [ ] Allow dynamic dashboard creation and removal

    Various Bug Fixes

    • [ ] Improve handling train, val, test splits for small and very imbalanced datasets #16
    enhancement 
    opened by grant-m-s 0
Releases: v1.0