lolviz

By Terence Parr. See Explained.ai for more stuff.

A very nice-looking JavaScript port of lolviz, with improvements, by Adnan M.Sagar.

A simple Python data-structure visualization tool that started out as a List Of Lists (lol) visualizer but now handles arbitrary object graphs, including function call stacks! lolviz tries to look out for and nicely format common data structures such as lists, dictionaries, linked lists, and binary trees. This package is primarily for use in teaching and presentations with Jupyter notebooks, but could also be used for debugging data structures. It is also useful for visualizing machine learning data structures, such as decision trees.

It seems that I'm always trying to describe how data is laid out in memory to students. There are really great data structure visualization tools but I wanted something I could use directly via Python in Jupyter notebooks.

The look and idea was inspired by the awesome Python tutor. The graphviz/dot tool does all of the heavy lifting underneath for layout; my contribution is primarily making graphviz display objects in a nice way.

Functionality

There are currently a number of functions of interest that return graphviz.files.Source objects:

  • listviz(): Horizontal list visualization
  • lolviz(): List of lists visualization with the first list vertical and the nested lists horizontal.
  • treeviz(): Binary trees visualized top-down, à la computer science.
  • objviz(): Generic object graph visualization that knows how to find lists of lists (like lolviz()) and linked lists. Trees are also displayed reasonably, but with left-to-right orientation instead of top-down (a limitation of graphviz).

  • callsviz(): Visualize the call stack and anything pointed to by globals, locals, or parameters. You can limit the variables displayed by passing in a list of varnames as an argument.
  • callviz(): Same as callsviz() but displays only the current function's frame or you can pass in a Python stack frame object to display.
  • matrixviz(data): Display a numpy ndarray; only 1D and 2D at the moment.
  • strviz(): Show a string like an array.

From plain Python, simply call the view() method on the returned object to display the visualization. From Jupyter, call IPython.display.display() with the returned object as an argument, or just make it the last expression in a cell.
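For instance, the kinds of object graphs these functions expect can be sketched in plain Python (the class names here are illustrative — lolviz discovers structure by walking references, though treeviz specifically looks for left/right children):

```python
class Node:
    """A linked-list node: a payload plus one next-style pointer."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class Tree:
    """A binary-tree node with left/right children, as treeviz() expects."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

head = Node('a', Node('b', Node('c')))   # a -> b -> c
root = Tree(1, Tree(2), Tree(3))         # 1 with children 2 and 3

# In a notebook (with lolviz installed) you would then call, e.g.:
#   objviz(head)   # draws the linked list
#   treeviz(root)  # draws the binary tree top-down
```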

Check out the examples.

Installation

First you need graphviz (more specifically, the dot executable). On a Mac it's easy:

$ brew install graphviz

Then just install the lolviz Python package:

$ pip install lolviz

or upgrade to the latest version:

$ pip install -U lolviz

Usage

From within plain Python, you can get a window to pop up using the view() method:

from lolviz import *
data = ['hi','mom',{3,4},{"parrt":"user"}]
g = listviz(data)
print(g.source) # if you want to see the graphviz source
g.view() # render and show graphviz.files.Source object

From within Jupyter notebooks you can skip the view() call because Jupyter knows how to display graphviz.files.Source objects.

For more examples that you can cut-and-paste, please see the jupyter notebook full of examples.

Preferences

There are global preferences you can set that affect the display for long values:

  • prefs.max_str_len (default 20): how many characters a value's string representation may have before it is abbreviated with ....
  • prefs.max_horiz_array_len (default 70): lists can quickly become too wide and distort the visualization. This preference sets how long the combined string representations of the list elements can get before a vertical layout is used instead.
  • prefs.max_list_elems (default 10): horizontal and vertical lists and sets show at most this many elements.
  • prefs.float_precision (default 5): how many decimal places to show for floats.
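For example, these could be adjusted up front (a sketch — assuming, as the Usage examples suggest, that prefs is exported by `from lolviz import *`):

```python
from lolviz import *

prefs.max_str_len = 30           # allow longer value strings before "..."
prefs.max_horiz_array_len = 100  # allow wider lists before going vertical
prefs.max_list_elems = 15        # show up to 15 list/set elements
prefs.float_precision = 3        # 3 decimal places for floats
```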

Implementation notes

Mostly notes for parrt to remember things.

Graphviz

  • Ugh. shape=record means html-labels can't use ports. warning!

  • warning: <td> and </td> must be on same line or row is super wide!

Deploy

$ python setup.py sdist upload 

Or to install locally

$ cd ~/github/lolviz
$ pip install .
Comments
  • name 'unicode' is not defined with Python 3.6

    Hi, thanks for this great python package.

    I noticed the following error when I do this in Jupyter on Python 3.6.2:

    import lolviz
    lolviz.objviz('1234567890')
    
    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-4-7a8f0cba785c> in <module>()
          1 import lolviz
    ----> 2 lolviz.objviz('1234567890')
    
    /usr/lib/python3.6/site-packages/lolviz.py in objviz(o, orientation)
        217 """ % orientation
        218     reachable = closure(o)
    --> 219     s += obj_nodes(reachable)
        220     s += obj_edges(reachable)
        221     s += "}\n"
    
    /usr/lib/python3.6/site-packages/lolviz.py in obj_nodes(nodes)
        229     # currently only making subgraph cluster for linked lists
        230     # otherwise it squishes trees.
    --> 231     max_edges_for_type,subgraphs = connected_subgraphs(nodes)
        232     c = 1
        233     for g in subgraphs:
    
    /usr/lib/python3.6/site-packages/lolviz.py in connected_subgraphs(reachable, varnames)
        785     of sets containing the id()s of all nodes in a specific subgraph
        786     """
    --> 787     max_edges_for_type = max_edges_in_connected_subgraphs(reachable, varnames)
        788 
        789     reachable = closure(reachable, varnames)
    
    /usr/lib/python3.6/site-packages/lolviz.py in max_edges_in_connected_subgraphs(reachable, varnames)
        839     """
        840     max_edges_for_type = defaultdict(int)
    --> 841     reachable = closure(reachable, varnames)
        842     reachable = [p for p in reachable if isplainobj(p)]
        843     for p in reachable:
    
    /usr/lib/python3.6/site-packages/lolviz.py in closure(p, varnames)
        706     from but don't include frame objects.
        707     """
    --> 708     return closure_(p, varnames, set())
        709 
        710 
    
    /usr/lib/python3.6/site-packages/lolviz.py in closure_(p, varnames, visited)
        710 
        711 def closure_(p, varnames, visited):
    --> 712     if p is None or isatom(p):
        713         return []
        714     if id(p) in visited:
    
    /usr/lib/python3.6/site-packages/lolviz.py in isatom(p)
        691 
        692 
    --> 693 def isatom(p): return type(p) == int or type(p) == float or type(p) == str or type(p) == unicode
        694 
        695 
    
    NameError: name 'unicode' is not defined
    
    py2py3 compatibility 
    opened by faultylee 5
  • Hidden values of nested table

    I find that a table in lolviz cannot show values if a cell in one row is a list, as in:

    
    T = [
        ['11','12','13','14',['a','b','c'],'16']
    ]
    objviz(T)
    

    Only a, b, c are shown; 11, 12, 13, 14, and 16 are not. Is this configurable? Thanks!

    question 
    opened by pytkr 2
  • Invalid syntax on Python 3.6 and lolviz 1.2.1

    Contd. from https://github.com/parrt/lolviz/issues/11#issuecomment-326474125

    Traceback (most recent call last):
    
      File "/Users/srid/code/ipython/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2862, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
    
      File "<ipython-input-1-ae470ca34f62>", line 1, in <module>
        from lolviz import *
    
      File "/Users/srid/code/ipython/lib/python3.6/site-packages/lolviz.py", line 442
        print "hashcode =", hashcode(key)
                         ^
    SyntaxError: invalid syntax
    
    
    py2py3 compatibility 
    opened by srid 1
  • Can't install on Python 3

    This package is currently registered as Python 2.7 only in PyPI. This is what happens if you try to install it on Python 3.6:

    (venv) $ pip install lolviz
    Collecting lolviz
      Using cached lolviz-1.2.tar.gz
    lolviz requires Python '<3' but the running Python is 3.6.0
    

    I think it's just a matter of adding the Python 3 classifier in setup.py, because as far as I can see the code is fine for Python 3.
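    A sketch of the kind of change being suggested (a hypothetical setup.py fragment — the field values are illustrative, not taken from the actual package metadata):

    ```python
    from setuptools import setup

    setup(
        name='lolviz',
        version='1.2',            # illustrative
        python_requires='>=2.7',  # instead of the '<3' pin
        classifiers=[
            'Programming Language :: Python :: 2.7',
            'Programming Language :: Python :: 3',
        ],
    )
    ```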

    duplicate 
    opened by miguelgrinberg 1
  • Dictionaries with tuple values are rendered incorrectly

    from lolviz import *
    
    dict_tuple_values = dictviz({'a': (1, 2)})
    dict_tuple_values.render(view=True, cleanup=True)
    

    It is rendered incorrectly (screenshot omitted).

    This only happens with 2-tuples. Any other tuple length is rendered as expected.

    This is the offending line: https://github.com/parrt/lolviz/blob/a6fc29b008a16993738416e793de71c3bff4175d/lolviz.py#L159

    What is the significance of ... and len(el) == 2 ?

    enhancement 
    opened by DeepSpace2 1
  • implement multi child tree

    Hi, I was looking for a package to visualize a tree search algorithm recently, and I found this repository and really liked it. But for tree visualization, the treeviz function only supports binary trees with children named left and right.

    So I made some modifications to support multiple children and specifying child names. I added 2 new parameters to treeviz(): childfields and show_all_children. The variable names in childfields will be recognized as child nodes. If show_all_children=False, it will only visualize the child names that exist; otherwise it will show all the names in childfields.

    I know you may be busy and this repository hasn't been updated for a long time. You can check these modifications whenever you are free. I would be glad to receive any suggestions from you.

    enhancement 
    opened by sunyiwei24601 5
  • Could we add a "super" display function that chooses the best one based on the datatype?

    Hello @parrt! I used your lolviz project a few years ago, and I rediscovered it today. It's awesome!

    Could we add a "super" display function that chooses the best one based on the datatype?

    The documentation shows some 8 different functions, and I don't want to spend my time remembering which function goes with which datatype. What is described in this documentation is almost trivially translated to Python code:

    modes = [ "str", "matrix", "call", "calls", "obj", "tree", "lol", "list" ]
    
    def unified_lolviz(obj, mode=None):
        """Unified function to display `obj` with lolviz, in Jupyter notebook only."""
        if mode == "str" or isinstance(obj, str):
            return strviz(obj)
        if mode == "matrix" or "<class 'numpy.ndarray'>" == str(type(obj)):
            # can't use isinstance(obj, np.ndarray) without importing numpy!
            return matrixviz(obj)
        if mode == "call": return callviz()
        if mode == "calls": return callsviz()
        if mode == "lol" or (isinstance(obj, list) and obj and isinstance(obj[0], list)):
            # obj is a list, is non-empty, and obj[0] is a list!
            return lolviz(obj)
        if mode == "list" or isinstance(obj, list):
            return listviz(obj)
        return objviz(obj)  # default
    

    So I'm opening this ticket: if you think this could be added to the library, we can discuss it here, and then I can take care of writing it, testing it, and sending a pull request; you can merge and then update on PyPI! What do you think?

    Regards from France, @Naereen

    enhancement 
    opened by Naereen 11
  • Create typed Class-Structure Diagram

    This isn't a bug, but a question for help / hints.

    I would like to use lolviz to create a graph of my Python class structure. Every class attribute is type-annotated, so it should be possible to link their interactions without creating class instances. Here is a minimal example:

    from copy import deepcopy
    from datetime import datetime as dt  # needed for dt.date and dt(...)
    from typing import List              # needed for the List[str] annotations
    from lolviz import *
    class Workout:
        def __init__(
            self,
            date: dt.date,
            name: str = "",
            duration: int = 0
        ):
            # assert isinstance(tss, (np.number, int))
            self.date: dt.date = date
            self.name: str = name
            self.duration: int = duration  # in seconds
            self.done: bool = False  # was 'boolean', which is undefined
    
    class Athlete:
        def __init__(
            self,
            name: str,
            birthday: dt.date,
            sports: List[str]
        ):
            self.name: str = name
            self.birthday: dt.date = birthday
            self.sports: List[str] = sports
    
    
    class DataContainer:
        def __init__(self, 
                     athlete: Athlete, 
                     tasks: List[Workout] = [], 
                     fulfilled: List[Workout] = []):
            self.athelete: Athlete = athlete
            self.tasks: List[Workout] = [w for w in tasks if isinstance(w, Workout)]
            self.fulfilled: List[Workout] = [w for w in fulfilled if isinstance(w, Workout)]
    
    me = Athlete("nico", dt(1990,3,1).date(), sports=["running","climbing"])
    t1 = Workout(date=dt(2020,1,1).date(), name="5k Run")
    f1 = deepcopy(t1)
    f1.done = True
    dc1 = DataContainer(me, tasks=[t1], fulfilled=[f1])
    

    Now I can use objviz(dc1) to create the following diagram (screenshot omitted).

    What I actually would like to achieve is a command like classviz(DataContainer) which will give me a similar chart, but not with the actual attribute values but their types. For sure there will be other small changes, but that's the basic idea.

    What I already can do is something like:

    def get_types(annotated_class):
        return (annotated_class.__name__, {k: v.__name__ for k,v in annotated_class.__init__.__annotations__.items()})
    
    get_types(Workout)
    

    which gives me something like ('Workout', {'date': 'date', 'name': 'str', 'duration': 'int'}). However, I can't find a proper way to create similar table elements which contain Workout in the header and the name-to-type mapping in the body.

    Can someone give me a hint, how to create such tables manually? I am also happy for any additional advices
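    One way to sketch such a table (a hint, not lolviz's internal API — the helper names here are hypothetical) is to emit graphviz HTML-like label source directly from the annotation dict, which needs no lolviz at all; the dot binary is only required to render the result:

    ```python
    def get_types(annotated_class):
        """Return (class name, {attribute: type name}) from __init__ annotations."""
        anns = annotated_class.__init__.__annotations__
        return annotated_class.__name__, {k: v.__name__ for k, v in anns.items()}

    def class_table_dot(annotated_class):
        """Render one class as graphviz dot source with an HTML-like table label."""
        name, fields = get_types(annotated_class)
        rows = ''.join(
            f'<tr><td align="left">{k}</td><td align="left">{t}</td></tr>'
            for k, t in fields.items())
        label = (f'<<table border="0" cellborder="1" cellspacing="0">'
                 f'<tr><td colspan="2"><b>{name}</b></td></tr>{rows}</table>>')
        return 'digraph {\n  node [shape=plaintext];\n  %s [label=%s];\n}' % (name, label)

    class Workout:
        def __init__(self, name: str = "", duration: int = 0):
            self.name = name
            self.duration = duration

    src = class_table_dot(Workout)
    print(src)  # paste into `dot -Tpng`, or wrap with graphviz.Source(src) in a notebook
    ```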

    feature 
    opened by krlng 0
  • visualization fails when variables contain "<" and ">" chars

    from lolviz import objviz
    a = {"hello": "<"}
    objviz(a).render()

    Error: Source.gv: syntax error in line 16 scanning a HTML string (missing '>'? bad nesting? longer than 16384?) String starting:<

    opened by ami-navon 1
    Releases: 1.4