Overview


Visual Automata

Copyright 2021 Lewi Lie Uberg
Released under the MIT license

Visual Automata is a Python 3 library built as a wrapper for Caleb Evans' Automata library to add more visualization features.


Prerequisites

pip install automata-lib
pip install pandas
pip install graphviz
pip install colormath
pip install jupyterlab

Installing

pip install visual-automata

VisualDFA

Importing

Import needed classes.

from automata.fa.dfa import DFA

from visual_automata.fa.dfa import VisualDFA

Instantiating DFAs

Define a VisualDFA that can accept any string ending with 00 or 11.

dfa = VisualDFA(
    states={"q0", "q1", "q2", "q3", "q4"},
    input_symbols={"0", "1"},
    transitions={
        "q0": {"0": "q3", "1": "q1"},
        "q1": {"0": "q3", "1": "q2"},
        "q2": {"0": "q3", "1": "q2"},
        "q3": {"0": "q4", "1": "q1"},
        "q4": {"0": "q4", "1": "q1"},
    },
    initial_state="q0",
    final_states={"q2", "q4"},
)

Converting

An automata-lib DFA can be converted to a VisualDFA.

Define an automata-lib DFA that can accept any string ending with 00 or 11.

dfa = DFA(
    states={"q0", "q1", "q2", "q3", "q4"},
    input_symbols={"0", "1"},
    transitions={
        "q0": {"0": "q3", "1": "q1"},
        "q1": {"0": "q3", "1": "q2"},
        "q2": {"0": "q3", "1": "q2"},
        "q3": {"0": "q4", "1": "q1"},
        "q4": {"0": "q4", "1": "q1"},
    },
    initial_state="q0",
    final_states={"q2", "q4"},
)

Convert automata-lib DFA to VisualDFA.

dfa = VisualDFA(dfa)
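
Once converted, the visualization features described in the following sections are available directly on the wrapped object, for example:

dfa.table           # transition table (see Transition Table below)
dfa.show_diagram()  # rendered state diagram (see Show Diagram below)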

Minimal-DFA

Creates a minimal DFA which accepts the same inputs as the old one. Unreachable states are removed and equivalent states are merged. States are renamed by default.

new_dfa = VisualDFA(
    states={'q0', 'q1', 'q2'},
    input_symbols={'0', '1'},
    transitions={
        'q0': {'0': 'q0', '1': 'q1'},
        'q1': {'0': 'q0', '1': 'q2'},
        'q2': {'0': 'q2', '1': 'q1'}
    },
    initial_state='q0',
    final_states={'q1'}
)
new_dfa.table
      0    1
→q0  q0  *q1
*q1  q0   q2
q2   q2  *q1
new_dfa.show_diagram()

[state diagram of new_dfa]

minimal_dfa = VisualDFA.minify(new_dfa)
minimal_dfa.show_diagram()

[state diagram of minimal_dfa]

minimal_dfa.table
                0        1
→{q0,q2}  {q0,q2}      *q1
*q1       {q0,q2}  {q0,q2}

Transition Table

Outputs the transition table for the given DFA. In the table, → marks the initial state and * marks the final (accepting) states.

dfa.table
       0    1
→q0   q3   q1
q1    q3  *q2
*q2   q3  *q2
q3   *q4   q1
*q4  *q4   q1

Check input strings

1001 does not end with 00 or 11, and is therefore Rejected

dfa.input_check("1001")
          [Rejected]                         
Step: Current state: Input symbol: New state:
1                →q0             1         q1
2                 q1             0         q3
3                 q3             0        *q4
4                *q4             1         q1

10011 does end with 11, and is therefore Accepted

dfa.input_check("10011")
          [Accepted]                         
Step: Current state: Input symbol: New state:
1                →q0             1         q1
2                 q1             0         q3
3                 q3             0        *q4
4                *q4             1         q1
5                 q1             1        *q2

Show Diagram

For IPython, dfa.show_diagram() may be used.
For a Python script, dfa.show_diagram(view=True) may be used to automatically open the rendered graph as a PDF file.

dfa.show_diagram()

[state diagram of dfa]
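
For a plain Python script, the view=True option mentioned above can be used in a minimal standalone sketch like the following (it reuses the DFA defined earlier):

from visual_automata.fa.dfa import VisualDFA

dfa = VisualDFA(
    states={"q0", "q1", "q2", "q3", "q4"},
    input_symbols={"0", "1"},
    transitions={
        "q0": {"0": "q3", "1": "q1"},
        "q1": {"0": "q3", "1": "q2"},
        "q2": {"0": "q3", "1": "q2"},
        "q3": {"0": "q4", "1": "q1"},
        "q4": {"0": "q4", "1": "q1"},
    },
    initial_state="q0",
    final_states={"q2", "q4"},
)

# view=True renders the graph and opens it as a PDF in the default viewer
# instead of returning an inline image (useful outside IPython/Jupyter).
dfa.show_diagram(view=True)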

The show_diagram method also accepts input strings, and will return a graph with gradient red arrows for Rejected results and gradient green arrows for Accepted results. It also displays a table of the transition steps; the step numbers in this table correspond to the [number] shown over each traversed arrow.

Please note that, for visual purposes, additional arrows are added if a transition is traversed more than once.

dfa.show_diagram("1001")
          [Rejected]                         
Step: Current state: Input symbol: New state:
1                →q0             1         q1
2                 q1             0         q3
3                 q3             0        *q4
4                *q4             1         q1

[state diagram for input "1001", rejected path highlighted in red]

dfa.show_diagram("10011")
          [Accepted]                         
Step: Current state: Input symbol: New state:
1                →q0             1         q1
2                 q1             0         q3
3                 q3             0        *q4
4                *q4             1         q1
5                 q1             1        *q2

[state diagram for input "10011", accepted path highlighted in green]

Authors

Lewi Lie Uberg

License

This project is licensed under the MIT License - see the LICENSE.md file for details

Acknowledgments


Comments
  • VisualNFA constructor attempts to call deepcopy on frozendicts

    The VisualNFA constructor attempts to create a deep copy of the passed nfa, in particular its transitions dictionary: https://github.com/lewiuberg/visual-automata/blob/3ea0cdc4de9d3919250919b70fbc036d75120a85/visual_automata/fa/nfa.py#L469

    The deepcopy method is monkeypatched onto dict via curse: https://github.com/lewiuberg/visual-automata/blob/3ea0cdc4de9d3919250919b70fbc036d75120a85/visual_automata/fa/nfa.py#L32

    However, automata-lib 7.0.1 returns a frozendict from the frozendict package instead, so the method call fails. It is not clear whether copying the frozendict is necessary at all; deepcopy simply returns the immutable object as-is.
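
    A minimal sketch of the distinction (the transitions value below is a hypothetical stand-in shaped like what automata-lib 7.x returns): copy.deepcopy handles a frozendict without problems, whereas the monkeypatched .deepcopy method that the library relies on does not exist on frozendict.

    import copy
    from frozendict import frozendict

    # Hypothetical transitions mapping of the shape automata-lib 7.x hands back.
    transitions = frozendict({"q0": frozendict({"i0": frozenset({"q0"})})})

    copied = copy.deepcopy(transitions)  # works; yields an equivalent frozendict
    # transitions.deepcopy()             # AttributeError, as in the traceback below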

    MRE

    Using most recent versions:

    • automata-lib 7.0.1
    • visual_automata 1.1.1
    from automata.fa.nfa import NFA
    from visual_automata.fa.nfa import VisualNFA
    
    nfa = NFA(states={"q0"}, input_symbols={"i0"}, transitions={"q0": {"i0": {"q0"}}}, initial_state="q0",
              final_states={"q0"})
    VisualNFA(nfa).show_diagram(view=True)
    

    Expected Behavior

    The automaton is shown.

    Actual Behavior

    Traceback (most recent call last):
      File "/path/to/scratch_1.py", line 6, in <module>
        VisualNFA(nfa).show_diagram(view=True)
      File "/path/to/site-packages/visual_automata/fa/nfa.py", line 619, in show_diagram
        all_transitions_pairs = self._transitions_pairs(self.nfa.transitions)
      File "/path/to/site-packages/visual_automata/fa/nfa.py", line 469, in _transitions_pairs
        all_transitions = all_transitions.deepcopy()
    AttributeError: 'frozendict.frozendict' object has no attribute 'deepcopy'
    
    opened by no-preserve-root
  • VisualDFA constructor implicitly checks wrapped automaton cardinality

    The VisualDFA constructor checks the dfa parameter using https://github.com/lewiuberg/visual-automata/blob/3ea0cdc4de9d3919250919b70fbc036d75120a85/visual_automata/fa/dfa.py#L34

    This checks whether dfa is truthy. Since the DFA class defines a __len__ method (and no __bool__), it is truthy iff len(dfa) != 0. Unfortunately, the length check computes the DFA's cardinality, i.e., the size of its input language. For DFAs with an infinite language, an exception is then raised. As a result, infinite DFAs cannot be visualized.

    This could be fixed by testing whether dfa is None. VisualNFA is not affected since NFA does not define a __len__ method at the moment, but it would fail if a similar method were added to NFA.
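
    A toy sketch of the failure mode (ToyDFA and InfiniteLanguageError below are hypothetical stand-ins that only mimic the relevant behavior of automata-lib's DFA): the truthiness check falls back to __len__ and raises, while an identity check does not.

    class InfiniteLanguageError(Exception):
        pass

    class ToyDFA:
        # Mimics automata-lib's DFA: __len__ computes the language's cardinality
        # and raises for infinite languages; no __bool__ is defined.
        def __len__(self):
            raise InfiniteLanguageError("The language represented by the DFA is infinite.")

    dfa = ToyDFA()

    try:
        if dfa:  # Python falls back to __len__ here, which raises
            pass
    except InfiniteLanguageError:
        print("truthiness check raised")

    if dfa is not None:  # the suggested fix: no __len__ call is involved
        print("identity check is safe")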

    MRE

    Using most recent versions:

    • automata-lib 7.0.1
    • visual_automata 1.1.1
    from automata.fa.dfa import DFA
    from visual_automata.fa.dfa import VisualDFA
    
    dfa = DFA(states={"q0"}, input_symbols={"i0"}, transitions={"q0": {"i0": "q0"}}, initial_state="q0",
              final_states={"q0"})
    VisualDFA(dfa).show_diagram(view=True)
    

    Expected Behavior

    The automaton is shown.

    Actual Behavior

    Traceback (most recent call last):
      File "/path/to/scratch_1.py", line 6, in <module>
        VisualDFA(dfa).show_diagram(view=True)
      File "/path/to/site-packages/visual_automata/fa/dfa.py", line 34, in __init__
        if dfa:
      File "/path/to/site-packages/automata/fa/dfa.py", line 160, in __len__
        return self.cardinality()
      File "/path/to/site-packages/automata/fa/dfa.py", line 792, in cardinality
        raise exceptions.InfiniteLanguageException("The language represented by the DFA is infinite.")
    automata.base.exceptions.InfiniteLanguageException: The language represented by the DFA is infinite.
    

    Workaround

    Manually copying the automaton works:

    VisualDFA(states=dfa.states, input_symbols=dfa.input_symbols, transitions=dfa.transitions,
              initial_state=dfa.initial_state, final_states=dfa.final_states).show_diagram(view=True)
    
    opened by no-preserve-root