Lolviz - A simple Python data-structure visualization tool for lists, lists of lists, and dictionaries; primarily for use in Jupyter notebooks / presentations

Overview

lolviz

By Terence Parr. See Explained.ai for more stuff.

A very nice looking JavaScript lolviz port with improvements by Adnan M.Sagar.

A simple Python data-structure visualization tool that started out as a List Of Lists (lol) visualizer but now handles arbitrary object graphs, including function call stacks! lolviz tries to look out for and format nicely common data structures such as lists, dictionaries, linked lists, and binary trees. This package is primarily for use in teaching and presentations with Jupyter notebooks, but could also be used for debugging data structures. It is useful for visualizing machine learning data structures, such as decision trees, as well.

It seems that I'm always trying to describe how data is laid out in memory to students. There are really great data structure visualization tools but I wanted something I could use directly via Python in Jupyter notebooks.

The look and idea was inspired by the awesome Python tutor. The graphviz/dot tool does all of the heavy lifting underneath for layout; my contribution is primarily making graphviz display objects in a nice way.

Functionality

There are currently a number of functions of interest that return graphviz.files.Source objects:

  • listviz(): Horizontal list visualization
  • lolviz(): List of lists visualization with the first list vertical and the nested lists horizontal.
  • treeviz(): Binary trees visualized top-down ala computer science.
  • objviz(): Generic object graph visualization that knows how to find lists of lists (like lolviz()) and linked lists. Trees are also displayed reasonably, but with left-to-right orientation instead of top-down (a limitation of graphviz).

  • callsviz(): Visualize the call stack and anything pointed to by globals, locals, or parameters. You can limit the variables displayed by passing in a list of varnames as an argument.
  • callviz(): Same as callsviz() but displays only the current function's frame or you can pass in a Python stack frame object to display.
  • matrixviz(data): Display a numpy ndarray; only 1D and 2D are supported at the moment.
  • strviz(): Show a string like an array.

From generic Python, simply call the view() method on the returned object to display the visualization. From Jupyter, call IPython.display.display() with the returned object as an argument.
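
For instance, here is a minimal sketch (not from the original docs; it assumes lolviz and the dot executable are installed) exercising a couple of these functions from plain Python:

from lolviz import lolviz, objviz

table = [[3, 1, 4], [1, 5, 9]]
lolviz(table).view()                      # list-of-lists layout; opens a window

class Node:                               # tiny hand-rolled linked list node
    def __init__(self, value, next=None):
        self.value, self.next = value, next

head = Node('a', Node('b', Node('c')))
objviz(head).view()                       # generic object graph; spots the linked list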

Check out the examples.

Installation

First you need graphviz (more specifically the dot executable). On a Mac it's easy:

$ brew install graphviz

Then just install the lolviz Python package:

$ pip install lolviz

or upgrade to the latest version:

$ pip install -U lolviz
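
To sanity-check the installation (a quick, optional step), confirm that both the dot executable and the Python package are reachable:

$ dot -V
$ python -c "import lolviz; print('lolviz imports OK')"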

Usage

From within generic Python, you can get a window to pop up using the view() method:

from lolviz import *
data = ['hi','mom',{3,4},{"parrt":"user"}]
g = listviz(data)
print(g.source) # if you want to see the graphviz source
g.view() # render and show graphviz.files.Source object

From within Jupyter notebooks you can skip the view() call because Jupyter knows how to display graphviz.files.Source objects.
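
A minimal notebook cell might look like this (a sketch; the inline rendering relies on Jupyter's rich display of graphviz.files.Source):

from lolviz import listviz
data = ['hi', 'mom', {3, 4}, {"parrt": "user"}]
listviz(data)                    # as the last expression in the cell, this renders inline

# or display it explicitly:
from IPython.display import display
display(listviz(data))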

For more examples that you can cut-and-paste, please see the Jupyter notebook full of examples.

Preferences

There are global preferences you can set that affect the display of long values (a short sketch follows the list):

  • prefs.max_str_len (default 20): how many characters a value's string representation may have before it is abbreviated with ....
  • prefs.max_horiz_array_len (default 70): lists can quickly become too wide and distort the visualization; this sets how long the combined string representations of the list values may get before the list is shown vertically instead.
  • prefs.max_list_elems (default 10): the maximum number of elements shown for horizontal and vertical lists and sets.
  • prefs.float_precision (default 5): how many decimal places to show for floats.
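
A short sketch of adjusting these preferences (assuming, as the attribute names above suggest, that prefs is exposed at the lolviz module level):

import lolviz

# assumption: prefs is a module-level preferences object, per the attribute names above
lolviz.prefs.max_str_len = 30            # abbreviate value strings after 30 characters
lolviz.prefs.max_horiz_array_len = 100   # allow wider lists before switching to vertical
lolviz.prefs.max_list_elems = 15         # show up to 15 elements per list/set
lolviz.prefs.float_precision = 3         # show 3 decimal places for floats

lolviz.listviz([3.14159265, 'a fairly long string value in a list'])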

Implementation notes

Mostly notes for parrt to remember things.

Graphviz

  • Ugh. shape=record means html-labels can't use ports. warning!

  • warning: <td> and </td> must be on same line or row is super wide!

Deploy

$ python setup.py sdist upload 

Or to install locally

$ cd ~/github/lolviz
$ pip install .
Comments
  • name 'unicode' is not defined with Python 3.6

    Hi, thanks for this great Python package.

    I noticed the following error when I do this in Jupyter on Python 3.6.2:

    import lolviz
    lolviz.objviz('1234567890')
    
    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-4-7a8f0cba785c> in <module>()
          1 import lolviz
    ----> 2 lolviz.objviz('1234567890')
    
    /usr/lib/python3.6/site-packages/lolviz.py in objviz(o, orientation)
        217 """ % orientation
        218     reachable = closure(o)
    --> 219     s += obj_nodes(reachable)
        220     s += obj_edges(reachable)
        221     s += "}\n"
    
    /usr/lib/python3.6/site-packages/lolviz.py in obj_nodes(nodes)
        229     # currently only making subgraph cluster for linked lists
        230     # otherwise it squishes trees.
    --> 231     max_edges_for_type,subgraphs = connected_subgraphs(nodes)
        232     c = 1
        233     for g in subgraphs:
    
    /usr/lib/python3.6/site-packages/lolviz.py in connected_subgraphs(reachable, varnames)
        785     of sets containing the id()s of all nodes in a specific subgraph
        786     """
    --> 787     max_edges_for_type = max_edges_in_connected_subgraphs(reachable, varnames)
        788 
        789     reachable = closure(reachable, varnames)
    
    /usr/lib/python3.6/site-packages/lolviz.py in max_edges_in_connected_subgraphs(reachable, varnames)
        839     """
        840     max_edges_for_type = defaultdict(int)
    --> 841     reachable = closure(reachable, varnames)
        842     reachable = [p for p in reachable if isplainobj(p)]
        843     for p in reachable:
    
    /usr/lib/python3.6/site-packages/lolviz.py in closure(p, varnames)
        706     from but don't include frame objects.
        707     """
    --> 708     return closure_(p, varnames, set())
        709 
        710 
    
    /usr/lib/python3.6/site-packages/lolviz.py in closure_(p, varnames, visited)
        710 
        711 def closure_(p, varnames, visited):
    --> 712     if p is None or isatom(p):
        713         return []
        714     if id(p) in visited:
    
    /usr/lib/python3.6/site-packages/lolviz.py in isatom(p)
        691 
        692 
    --> 693 def isatom(p): return type(p) == int or type(p) == float or type(p) == str or type(p) == unicode
        694 
        695 
    
    NameError: name 'unicode' is not defined
    
    py2py3 compatibility 
    opened by faultylee 5
  • Hidden values of nested table

    I find that a table in lolviz cannot show the values in a row if one cell of that row is a list, as in:

    
    T = [
        ['11','12','13','14',['a','b','c'],'16']
    ]
    objviz(T)
    

    Only a, b, and c are shown; 11, 12, 13, 14, and 16 are not shown. Is this configurable? Thanks!

    question 
    opened by pytkr 2
  • Invalid syntax on Python 3.6 and lolviz 1.2.1

    Contd. from https://github.com/parrt/lolviz/issues/11#issuecomment-326474125

    Traceback (most recent call last):
    
      File "/Users/srid/code/ipython/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2862, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
    
      File "<ipython-input-1-ae470ca34f62>", line 1, in <module>
        from lolviz import *
    
      File "/Users/srid/code/ipython/lib/python3.6/site-packages/lolviz.py", line 442
        print "hashcode =", hashcode(key)
                         ^
    SyntaxError: invalid syntax
    
    
    py2py3 compatibility 
    opened by srid 1
  • Can't install on Python 3

    This package is currently registered as Python 2.7 only in PyPI. This is what happens if you try to install it on Python 3.6:

    (venv) $ pip install lolviz
    Collecting lolviz
      Using cached lolviz-1.2.tar.gz
    lolviz requires Python '<3' but the running Python is 3.6.0
    

    I think it's just a matter of adding the Python 3 classifier in setup.py, because as far as I can see the code is fine for Python 3.

    duplicate 
    opened by miguelgrinberg 1
  • Dictionaries with tuple values are rendered incorrectly

    from lolviz import *
    
    dict_tuple_values = dictviz({'a': (1, 2)})
    dict_tuple_values.render(view=True, cleanup=True)
    

    It is rendered incorrectly (screenshot omitted).

    This only happens with 2-tuples. Any other tuple length is rendered as expected.

    This is the offending line: https://github.com/parrt/lolviz/blob/a6fc29b008a16993738416e793de71c3bff4175d/lolviz.py#L159

    What is the significance of ... and len(el) == 2 ?

    enhancement 
    opened by DeepSpace2 1
  • implement multi child tree

    Hi, I was looking for a package to visualize tree search algorithms recently, and I found this repository and really liked it. But for tree visualization, the treeviz function only supports binary trees whose child fields are named left and right.

    So I made some modifications to support multiple children and specifying child field names. I added two new parameters to treeviz(): childfields and show_all_children. The field names in childfields are recognized as child nodes. If show_all_children=False, only the child fields that actually exist are visualized; otherwise all the names in childfields are shown.

    I know you may be busy and this repository hasn't been updated for a long time. You can check these modifications whenever you are free. I would be glad to receive any suggestions from you.

    enhancement 
    opened by sunyiwei24601 5
  • Could we add a "super" display function that chooses the best one based on the datatype?

    Hello @parrt! I used your lolviz project a few years ago, and I rediscovered it today. It's awesome!

    Could we add a "super" display function that chooses the best one based on the datatype?

    The documentation lists about 8 different functions, and I don't want to spend my time remembering which function goes with which datatype. What the documentation describes is almost trivially translated to Python code.

    modes = ["str", "matrix", "call", "calls", "obj", "tree", "lol", "list"]

    def unified_lolviz(obj, mode=None):
        """Unified function to display `obj` with lolviz, in Jupyter notebook only."""
        if mode == "str" or isinstance(obj, str):
            return strviz(obj)
        if mode == "matrix" or str(type(obj)) == "<class 'numpy.ndarray'>":
            # can't use isinstance(obj, np.ndarray) without importing numpy!
            return matrixviz(obj)
        if mode == "call": return callviz()
        if mode == "calls": return callsviz()
        if mode == "lol" or (isinstance(obj, list) and obj and isinstance(obj[0], list)):
            # obj is a list, is non-empty, and obj[0] is a list!
            return lolviz(obj)
        if mode == "list" or isinstance(obj, list):
            return listviz(obj)
        return objviz(obj)  # default
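
    A quick usage sketch (my own illustration, assuming the dispatcher above has been defined in the notebook):

    unified_lolviz('a string')           # dispatches to strviz
    unified_lolviz([[1, 2], [3, 4]])     # dispatches to lolviz
    unified_lolviz({'a': (1, 2)})        # falls through to objviz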
    

    So I'm opening this ticket: if you think this could be added to the library, we can discuss it here, and then I can take care of writing it, testing it, and sending a pull request; you can merge and then update on PyPI! What do you think?

    Regards from France, @Naereen

    enhancement 
    opened by Naereen 11
  • Create typed Class-Structure Diagram

    This isn't a bug, but a question for help / hints.

    I would like to use lolviz to create a graph of my Python class structure. Every class attribute is type annotated, so it should be possible to link their interactions without creating class instances. Here is a minimal example:

    from copy import deepcopy
    from datetime import datetime as dt   # needed for dt(...) and dt.date below
    from typing import List               # needed for the List[...] annotations
    from lolviz import *
    class Workout:
        def __init__(
            self,
            date: dt.date,
            name: str = "",
            duration: int = 0
        ):
            # assert isinstance(tss, (np.number, int))
            self.date: dt.date = date
            self.name: str = name
            self.duration: int = duration  # in seconds
            self.done: bool = False  # 'boolean' is undefined; bool is intended
    
    class Athlete:
        def __init__(
            self,
            name: str,
            birthday: dt.date,
            sports: List[str]
        ):
            self.name: str = name
            self.birthday: dt.date = birthday
            self.sports: List[str] = sports
    
    
    class DataContainer:
        def __init__(self, 
                     athlete: Athlete, 
                     tasks: List[Workout] = [], 
                     fulfilled: List[Workout] = []):
            self.athelete: Athlete = athlete
            self.tasks: List[Workout] = [w for w in tasks if isinstance(w, Workout)]
            self.fulfilled: List[Workout] = [w for w in fulfilled if isinstance(w, Workout)]
    
    me = Athlete("nico", dt(1990,3,1).date(), sports=["running","climbing"])
    t1 = Workout(date=dt(2020,1,1).date(), name="5k Run")
    f1 = deepcopy(t1)
    f1.done = True
    dc1 = DataContainer(me, tasks=[t1], fulfilled=[f1])
    

    Now I can use objviz(dc1) to create a diagram of the resulting object graph (screenshot omitted).

    What I actually would like to achieve is a command like classviz(DataContainer) which will give me a similar chart, but not with the actual attribute values but their types. For sure there will be other small changes, but that's the basic idea.

    What I already can do is something like:

    def get_types(annotated_class):
        return (annotated_class.__name__, {k: v.__name__ for k,v in annotated_class.__init__.__annotations__.items()})
    
    get_types(Workout)
    

    which gives me something like ('Workout', {'date': 'date', 'name': 'str', 'duration': 'int'}). However, I can't find a proper way to create similar table elements that contain Workout in the header and the name-to-type mapping in their body.

    Can someone give me a hint on how to create such tables manually? I am also happy for any additional advice.
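
    One possible starting point (a rough sketch, not part of lolviz; it drives the graphviz package directly and reuses the get_types() helper above) for building such a table by hand:

    import graphviz

    def classviz(annotated_class):
        # header row = class name, body rows = attribute name / annotated type
        name, types = get_types(annotated_class)
        rows = ''.join(
            '<tr><td align="left">%s</td><td align="left">%s</td></tr>' % (k, t)
            for k, t in types.items())
        label = ('<<table border="0" cellborder="1" cellspacing="0">'
                 '<tr><td colspan="2"><b>%s</b></td></tr>%s</table>>' % (name, rows))
        g = graphviz.Digraph(node_attr={'shape': 'plaintext'})
        g.node(name, label=label)   # HTML-like label; keep <td>...</td> on one line
        return g

    classviz(Workout)   # renders inline in Jupyter, or call .view() elsewhere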

    feature 
    opened by krlng 0
  • visualization fails when variables contain "<" and ">" chars

    from lolviz import objviz
    a = {"hello": "<"}
    objviz(a).render()

    Error: Source.gv: syntax error in line 16 scanning a HTML string (missing '>'? bad nesting? longer than 16384?) String starting:<
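
    The likely cause (my reading, not from the thread): lolviz embeds values inside graphviz HTML-like labels, so characters such as <, > and & in the data would need to be escaped before they reach the label, e.g.:

    import html

    print(html.escape('<'))   # '&lt;' is legal inside an HTML-like graphviz label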

    opened by ami-navon 1