A 3D visualizer for machine-learning models and their internal tensors

Related tags

Deep Learning, viewer
Overview

logo

The free Zetane Viewer is a tool to help understand and accelerate discovery in machine learning and artificial neural networks. It can be used to open the AI black box by visualizing and understanding the model's architecture and internal data (feature maps, weights, biases, and layer output tensors). Think of it as neuroimaging, or brain imaging, for artificial neural networks and machine learning algorithms.

You can also launch your own Zetane workspace directly from your existing scripts or notebooks with a few commands from the Zetane Python API, as sketched below.

nodes tensors
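As a rough illustration of what that looks like (the class and method names here are assumptions based on the Getting Started guide, not verbatim from it), launching a workspace from a script might go like this:

```python
# Hypothetical sketch of the Zetane Python API; class and method
# names are assumptions, not confirmed signatures.
import zetane

zctx = zetane.Context()        # launch or attach to a Zetane engine
zmodel = zctx.model()          # create a model object in the workspace
zmodel.onnx("my_model.onnx")   # point it at an ONNX file on disk
zmodel.update()                # render the architecture in the viewer
```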


Zetane Viewer

Installation

You can install the free Zetane Viewer for Windows, Linux, and Mac, and explore ZTN and ONNX files.

Download for Windows

Download for Linux

Download for Mac

Tutorial

In this video, we will show you how to load a Zetane or ONNX model, navigate the model and view different tensors:

Below are step-by-step instructions for loading and inspecting a model in the Zetane Viewer:

  • How to load a model

The viewer supports both .ONNX and .ZTN files. The ZTN files were generated from the Keras and PyTorch scripts shared in this Git repository. After launching the viewer, to load a Zetane model, simply click “Load Zetane Model” in the DATA I/O menu. To load an ONNX model, click “Import ONNX Model” in the same menu. Below you can access the ZTN files for a few models to load. You can also access ONNX files from the ONNX Model Zoo.

loading

When a model is displayed in the Zetane engine, any component of the model can be accessed in a few clicks.

At the highest level, we have the model architecture, which is composed of interconnected nodes and tensors. Each node represents an operator in the computational graph. Typically, an input tensor is passed to the model and, as it flows through the nodes, is transformed into intermediate tensors until we reach the model's output tensor. In the Zetane engine, data flows from left to right.

architecture
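The same node/tensor structure can be inspected programmatically with the official `onnx` Python package; this minimal sketch (assuming a local file named `model.onnx`) prints each operator with its input and output tensors, in the same left-to-right order the viewer draws them:

```python
# Walk an ONNX computational graph: each node is an operator that
# consumes input tensors and produces intermediate or output tensors.
import onnx

model = onnx.load("model.onnx")  # assumed local path
for node in model.graph.node:
    print(f"{node.op_type}: {list(node.input)} -> {list(node.output)}")

print("graph inputs: ", [i.name for i in model.graph.input])
print("graph outputs:", [o.name for o in model.graph.output])
```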

  • How to navigate

You can navigate the model viewer window by right-clicking and dragging to explore the space, and by using the scroll wheel to zoom in and out. Here is the complete list of navigation instructions. You can change the behavior of the mouse wheel (either zoom or navigate) via the Mouse Zoom toggle in the top menu.

zoom

  • Loading custom model inputs

After loading a model, you may want to run inference on your own inputs. Zetane supports loading .npy, .npz, .png, .jpg, .pb (protobuf), .tiff, and .hdr files that match the input dimensions of the model; the engine will attempt to intelligently resize the loaded file (when possible) so the data can be sent to the model. After loading and running the input, you can explore in detail how your model interpreted the input data. An example of preparing such an input follows below.

nodes tensors tensors
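For example, a `.npy` input matching the model's expected dimensions can be prepared with NumPy. Here we assume a 1×1×64×64 float input (the shape of the emotion-ferplus model that appears in the comments below); substitute your own model's input shape:

```python
# Create a .npy file shaped to match the model's input tensor.
import numpy as np

x = np.random.rand(1, 1, 64, 64).astype(np.float32)  # assumed input dims
np.save("input.npy", x)  # load this file from the viewer to run inference
```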

  • How to inspect different layers and feature maps

For each layer, you can view all the feature maps and filters by clicking “Show Feature Maps” on the corresponding node. You can inspect the inputs, outputs, weights, and biases using the tensor view bar.

featuremap

  • Tensor view bar

By clicking the associated button, you can visualize the inputs, outputs, weights, and biases (if applicable) of each individual layer. You can also inspect the shape, type, mean, and standard deviation of each tensor.

tensorview

Statistics about the tensor's values and their distribution are given in the histogram in the top panel, along with the tensor's name and shape. The tensor and its values are represented in the middle panel, and the bottom section contains tensor-visualization parameters and a refresh button that lets the user refresh the tensor. This is useful when the input or the weights change in real time.

tensorpanel
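The summary statistics in the top panel are the usual quantities; as a point of reference, a NumPy equivalent (using the hypothetical `input.npy` from the example above) would be:

```python
# Reproduce the tensor panel's summary statistics outside the viewer.
import numpy as np

t = np.load("input.npy")
print("shape:", t.shape, "dtype:", t.dtype)
print("mean:", t.mean(), "std:", t.std())
counts, edges = np.histogram(t, bins=32)  # the distribution histogram
```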

  • Styles of tensor visualization

Tensors can be inspected in different ways, including 3D and 2D views, with and without the actual values.

tensorview2



| Tensor view | Screenshot |
| --- | --- |
| N-dimensional tensor projected in 3D space | tensor_viz_3d |
| N-dimensional tensor projected in 2D space | tensor_viz_2d |
| Tensor values, with each value colored according to the gradient shown on the x-axis of the distribution histogram | tensor_viz_color-values |
| Tensor values | tensor_viz_values |
| Feature-map view when the tensor has a 3-dimensional shape | tensor_viz_values |
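To see why a 3-dimensional tensor renders naturally as feature maps, here is a small Matplotlib sketch (assuming a C×H×W activation tensor such as the convolution outputs above; the data here is random, purely for illustration) that tiles each channel as a 2D image:

```python
# Tile a (C, H, W) tensor as a grid of per-channel feature maps,
# mirroring the viewer's feature-map rendering for 3-dim tensors.
import numpy as np
import matplotlib.pyplot as plt

fmap = np.random.rand(16, 32, 32)  # assumed (channels, height, width)
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for ch, ax in enumerate(axes.flat):
    ax.imshow(fmap[ch], cmap="viridis")
    ax.axis("off")
plt.tight_layout()
plt.show()
```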

Models

We have generated a few ZTN models so that you can inspect their architecture and internal tensors in the viewer. We have also provided the code used to generate these models.

Image Classification

Object Detection

Image Segmentation

Body, Face and Gesture Analysis

Image Manipulation

XAI

Classic Machine Learning




Comments
  • BUG: Viewer crashes when loading any model

    I've tried loading multiple models, including emotion-ferplus (in both ONNX and ZTN formats), but they always immediately crash the viewer.

    OS: Ubuntu 20.04
    Zetane: 1.3.2
    Dump:

    LoadUniverse(): ZTN_REQUIRE_LOGIN = 1 
    online = 0 
    ================== ExposeIRnodes: ================== 
    @@@ ExposeIRnodes() n_IR_outputs = 51. 
     <- [Parameter1367_reshape1. 
     <- [Minus340_Output_0. 
     <- [Block352_Output_0. 
     <- [Convolution362_Output_0. 
     <- [Plus364_Output_0. 
     <- [ReLU366_Output_0. 
     <- [Convolution380_Output_0. 
     <- [Plus382_Output_0. 
     <- [ReLU384_Output_0. 
     <- [Pooling398_Output_0. 
     <- [Dropout408_Output_0. 
     <- [Convolution418_Output_0. 
     <- [Plus420_Output_0. 
     <- [ReLU422_Output_0. 
     <- [Convolution436_Output_0. 
     <- [Plus438_Output_0. 
     <- [ReLU440_Output_0. 
     <- [Pooling454_Output_0. 
     <- [Dropout464_Output_0. 
     <- [Convolution474_Output_0. 
     <- [Plus476_Output_0. 
     <- [ReLU478_Output_0. 
     <- [Convolution492_Output_0. 
     <- [Plus494_Output_0. 
     <- [ReLU496_Output_0. 
     <- [Convolution510_Output_0. 
     <- [Plus512_Output_0. 
     <- [ReLU514_Output_0. 
     <- [Pooling528_Output_0. 
     <- [Dropout538_Output_0. 
     <- [Convolution548_Output_0. 
     <- [Plus550_Output_0. 
     <- [ReLU552_Output_0. 
     <- [Convolution566_Output_0. 
     <- [Plus568_Output_0. 
     <- [ReLU570_Output_0. 
     <- [Convolution584_Output_0. 
     <- [Plus586_Output_0. 
     <- [ReLU588_Output_0. 
     <- [Pooling602_Output_0. 
     <- [Dropout612_Output_0. 
     <- [Dropout612_Output_0_reshape0. 
     <- [Times622_Output_0. 
     <- [Plus624_Output_0. 
     <- [ReLU636_Output_0. 
     <- [Dropout646_Output_0. 
     <- [Times656_Output_0. 
     <- [Plus658_Output_0. 
     <- [ReLU670_Output_0. 
     <- [Dropout680_Output_0. 
     <- [Times690_Output_0. 
    node_name = Node_0000000000_Times622_reshape1_Reshape. 
     -> [Parameter1367_reshape1. 
    node_name = Node_0000000001_Minus340_Sub. 
     -> [Minus340_Output_0. 
    node_name = Node_0000000002_Block352_Div. 
     -> [Block352_Output_0. 
    node_name = Node_0000000003_Convolution362_Conv. 
     -> [Convolution362_Output_0. 
    node_name = Node_0000000004_Plus364_Add. 
     -> [Plus364_Output_0. 
    node_name = Node_0000000005_ReLU366_Relu. 
     -> [ReLU366_Output_0. 
    node_name = Node_0000000006_Convolution380_Conv. 
     -> [Convolution380_Output_0. 
    node_name = Node_0000000007_Plus382_Add. 
     -> [Plus382_Output_0. 
    node_name = Node_0000000008_ReLU384_Relu. 
     -> [ReLU384_Output_0. 
    node_name = Node_0000000009_Pooling398_MaxPool. 
     -> [Pooling398_Output_0. 
    node_name = Node_0000000010_Dropout408_Dropout. 
     -> [Dropout408_Output_0. 
    node_name = Node_0000000011_Convolution418_Conv. 
     -> [Convolution418_Output_0. 
    node_name = Node_0000000012_Plus420_Add. 
     -> [Plus420_Output_0. 
    node_name = Node_0000000013_ReLU422_Relu. 
     -> [ReLU422_Output_0. 
    node_name = Node_0000000014_Convolution436_Conv. 
     -> [Convolution436_Output_0. 
    node_name = Node_0000000015_Plus438_Add. 
     -> [Plus438_Output_0. 
    node_name = Node_0000000016_ReLU440_Relu. 
     -> [ReLU440_Output_0. 
    node_name = Node_0000000017_Pooling454_MaxPool. 
     -> [Pooling454_Output_0. 
    node_name = Node_0000000018_Dropout464_Dropout. 
     -> [Dropout464_Output_0. 
    node_name = Node_0000000019_Convolution474_Conv. 
     -> [Convolution474_Output_0. 
    node_name = Node_0000000020_Plus476_Add. 
     -> [Plus476_Output_0. 
    node_name = Node_0000000021_ReLU478_Relu. 
     -> [ReLU478_Output_0. 
    node_name = Node_0000000022_Convolution492_Conv. 
     -> [Convolution492_Output_0. 
    node_name = Node_0000000023_Plus494_Add. 
     -> [Plus494_Output_0. 
    node_name = Node_0000000024_ReLU496_Relu. 
     -> [ReLU496_Output_0. 
    node_name = Node_0000000025_Convolution510_Conv. 
     -> [Convolution510_Output_0. 
    node_name = Node_0000000026_Plus512_Add. 
     -> [Plus512_Output_0. 
    node_name = Node_0000000027_ReLU514_Relu. 
     -> [ReLU514_Output_0. 
    node_name = Node_0000000028_Pooling528_MaxPool. 
     -> [Pooling528_Output_0. 
    node_name = Node_0000000029_Dropout538_Dropout. 
     -> [Dropout538_Output_0. 
    node_name = Node_0000000030_Convolution548_Conv. 
     -> [Convolution548_Output_0. 
    node_name = Node_0000000031_Plus550_Add. 
     -> [Plus550_Output_0. 
    node_name = Node_0000000032_ReLU552_Relu. 
     -> [ReLU552_Output_0. 
    node_name = Node_0000000033_Convolution566_Conv. 
     -> [Convolution566_Output_0. 
    node_name = Node_0000000034_Plus568_Add. 
     -> [Plus568_Output_0. 
    node_name = Node_0000000035_ReLU570_Relu. 
     -> [ReLU570_Output_0. 
    node_name = Node_0000000036_Convolution584_Conv. 
     -> [Convolution584_Output_0. 
    node_name = Node_0000000037_Plus586_Add. 
     -> [Plus586_Output_0. 
    node_name = Node_0000000038_ReLU588_Relu. 
     -> [ReLU588_Output_0. 
    node_name = Node_0000000039_Pooling602_MaxPool. 
     -> [Pooling602_Output_0. 
    node_name = Node_0000000040_Dropout612_Dropout. 
     -> [Dropout612_Output_0. 
    node_name = Node_0000000041_Times622_reshape0_Reshape. 
     -> [Dropout612_Output_0_reshape0. 
    node_name = Node_0000000042_Times622_MatMul. 
     -> [Times622_Output_0. 
    node_name = Node_0000000043_Plus624_Add. 
     -> [Plus624_Output_0. 
    node_name = Node_0000000044_ReLU636_Relu. 
     -> [ReLU636_Output_0. 
    node_name = Node_0000000045_Dropout646_Dropout. 
     -> [Dropout646_Output_0. 
    node_name = Node_0000000046_Times656_MatMul. 
     -> [Times656_Output_0. 
    node_name = Node_0000000047_Plus658_Add. 
     -> [Plus658_Output_0. 
    node_name = Node_0000000048_ReLU670_Relu. 
     -> [ReLU670_Output_0. 
    node_name = Node_0000000049_Dropout680_Dropout. 
     -> [Dropout680_Output_0. 
    node_name = Node_0000000050_Times690_MatMul. 
     -> [Times690_Output_0. 
    node_name = Node_0000000051_Plus692_Add. 
    @@@ ExposeIRnodes() [Outputs] = 1 --> 52. 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
    ***************** ValidateIRnodes: ***************** 
    ====================================  
    input_dims = [ 1, 1, 64, 64, ]. 
    --> input 0[Input3]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ ]. 
    --> input 1[Constant339]: Type 1; [0 dims] tensor  
    --------------- 
    input_dims = [ ]. 
    --> input 2[Constant343]: Type 1; [0 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 3, 3, ]. 
    --> input 3[Parameter3]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 1, ]. 
    --> input 4[Parameter4]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 64, 64, 3, 3, ]. 
    --> input 5[Parameter23]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 1, ]. 
    --> input 6[Parameter24]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 128, 64, 3, 3, ]. 
    --> input 7[Parameter63]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 128, 1, 1, ]. 
    --> input 8[Parameter64]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 128, 128, 3, 3, ]. 
    --> input 9[Parameter83]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 128, 1, 1, ]. 
    --> input 10[Parameter84]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 128, 3, 3, ]. 
    --> input 11[Parameter575]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 12[Parameter576]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 13[Parameter595]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 14[Parameter596]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 15[Parameter615]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 16[Parameter616]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 17[Parameter655]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 18[Parameter656]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 19[Parameter675]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 20[Parameter676]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 21[Parameter695]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 22[Parameter696]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 2, ]. 
    --> input 23[Dropout612_Output_0_reshape0_shape]: Type 7; [1 dims] tensor  
    --------------- 
    input_dims = [ 256, 4, 4, 1024, ]. 
    --> input 24[Parameter1367]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 2, ]. 
    --> input 25[Parameter1367_reshape1_shape]: Type 7; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, ]. 
    --> input 26[Parameter1368]: Type 1; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, 1024, ]. 
    --> input 27[Parameter1403]: Type 1; [2 dims] tensor  
    --------------- 
    input_dims = [ 1024, ]. 
    --> input 28[Parameter1404]: Type 1; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, 8, ]. 
    --> input 29[Parameter1693]: Type 1; [2 dims] tensor  
    --------------- 
    input_dims = [ 8, ]. 
    --> input 30[Parameter1694]: Type 1; [1 dims] tensor  
    --------------- 
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
    output_dims = [ 1, 8, ]. 
    --> output 0/1[Plus692_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 4096, 1024, ]. 
    --> output 1/1[Parameter1367_reshape1]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1, 64, 64, ]. 
    --> output 2/1[Minus340_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 1, 64, 64, ]. 
    --> output 3/1[Block352_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 4/1[Convolution362_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 5/1[Plus364_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 6/1[ReLU366_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 7/1[Convolution380_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 8/1[Plus382_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 9/1[ReLU384_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 32, 32, ]. 
    --> output 10/1[Pooling398_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 32, 32, ]. 
    --> output 11/1[Dropout408_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 12/1[Convolution418_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 13/1[Plus420_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 14/1[ReLU422_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 15/1[Convolution436_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 16/1[Plus438_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 17/1[ReLU440_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 16, 16, ]. 
    --> output 18/1[Pooling454_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 16, 16, ]. 
    --> output 19/1[Dropout464_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 20/1[Convolution474_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 21/1[Plus476_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 22/1[ReLU478_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 23/1[Convolution492_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 24/1[Plus494_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 25/1[ReLU496_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 26/1[Convolution510_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 27/1[Plus512_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 28/1[ReLU514_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 29/1[Pooling528_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 30/1[Dropout538_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 31/1[Convolution548_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 32/1[Plus550_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 33/1[ReLU552_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 34/1[Convolution566_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 35/1[Plus568_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 36/1[ReLU570_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 37/1[Convolution584_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 38/1[Plus586_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 39/1[ReLU588_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 4, 4, ]. 
    --> output 40/1[Pooling602_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 4, 4, ]. 
    --> output 41/1[Dropout612_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 4096, ]. 
    --> output 42/1[Dropout612_Output_0_reshape0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 43/1[Times622_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 44/1[Plus624_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 45/1[ReLU636_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 46/1[Dropout646_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 47/1[Times656_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 48/1[Plus658_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 49/1[ReLU670_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 50/1[Dropout680_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 8, ]. 
    --> output 51/1[Times690_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 
    ValidateIRnodes() 52 --> 52=52=52 valid output tensors  
    --------------- 
    ----------------- ValidateIRnodes. ----------------- 
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    *** ExposeIRnodes: 1 --> 52 Outputs. 
    *** type: [FLOAT] ~?= STRING 
    TVZ10()  input_dims = [ 1, 1, 64, 64, ]. 
    --> input [0]: Type FLOAT; [4 dims] tensor  
    --------------- 
    Warning: Could not load "/opt/zetane/lib/graphviz/libgvplugin_pango.so.6" - file not found
    terminate called after throwing an instance of 'std::invalid_argument'
      what():  stod
    /usr/bin/zetane: line 26: 37102 Aborted                 (core dumped) ./Zetane --server
    
    
    opened by paulgavrikov 4
  • Free Trial not Available

    After clicking "upgrade 2 pro", I arrive at your pricing page. Clicking on "free trial" redirects me to the documentation, which instructs me to click the button "upgrade 2 pro". Now I'm stuck in an infinite loop and unhappy about it.

    I'd like to successfully exit this loop and try your product. Any Tips?

    opened by Whadup 3
  • Installed the deb on Ubuntu 20.04, but the viewer crashes when loading an input jpg; how can I get a log to find the cause?

    (base) [email protected]:~/Downloads$ sudo dpkg -i Zetane-1.7.0.deb
    (Reading database ... 330200 files and directories currently installed.)
    Preparing to unpack Zetane-1.7.0.deb ...
    Unpacking zetane (1.7.0) over (1.7.0) ...
    Setting up zetane (1.7.0) ...
    Processing triggers for gnome-menus (3.36.0-1ubuntu1) ...
    Processing triggers for desktop-file-utils (0.24-1ubuntu3) ...
    Processing triggers for mime-support (3.64ubuntu1) ...
    Processing triggers for hicolor-icon-theme (0.17-2) ...

    opened by mathpopo 2
  • Engine is not launched after running the example 'hello world' code

    Following the guide at https://docs.zetane.com/getting_started.html#installation, I created a script to run the 'hello world' code. However, the engine launched without showing anything.

    OS: Windows 10.0
    Zetane: 1.7.0

    Console output:

    Dialing Zetane... Did not connect!
    Dialing Zetane... Did not connect!
    Dialing Zetane... Did not connect!
    running process: /usr/bin/zetane --server 127.0.0.1 --port 4004
    Dialing Zetane... Did not connect!
    Dialing Zetane... Did not connect!
    Dialing Zetane... Connected to Zetane Engine!

    opened by wftubby 0
  • Engine is not launched after running the example 'hello world' code

    Following the guide at https://docs.zetane.com/getting_started.html#installation, I created a script to run the 'hello world' code. However, the engine was not launched and it keeps printing "Dialing Zetane".

    OS: Ubuntu 18.04
    Zetane: 1.7.0

    Console output:

    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    running process: /usr/bin/zetane --server 127.0.0.1 --port 4004
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    
    
    opened by akzing-hz 6
Releases (v1.7.4)
  • v1.7.4 (Jun 1, 2022)

    Viewer Engine

    • Added support for ONNX 1.10.2
    • Added support for ONNX Runtime 1.10.0
    • Added support for Keras/TensorFlow 2.9.1
    • Improved progress notifications when loading Keras models
    • Fixed a crash caused by nested Keras models.
    • Reduced Tensor viewer memory usage
    • Dropped support for Ubuntu 16.04 LTS. See the up-to-date Minimum Requirements.
    • Deprecated support for macOS 10.14 Mojave

    API

    • Added the Zetane API context manager to automate view updates and cleanup, resulting in less verbose code (a hypothetical usage sketch follows these notes).
    • Added support for Python 3.9
    • Dropped support for Python 3.6
    • Fixed protobuf dependency versioning
    Source code (tar.gz)
    Source code (zip)
    Zetane-1.7.4.deb (273.45 MB)
    Zetane-1.7.4.dmg (312.91 MB)
    Zetane-1.7.4.msi (300.01 MB)
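These notes don't show the context manager itself; purely as a hypothetical illustration of what automated view updates and cleanup could look like (the names below are assumptions, not the documented Zetane API):

```python
# Hypothetical sketch only: assumes zetane.Context supports the
# context-manager protocol, per the v1.7.4 release note.
import zetane

with zetane.Context() as zctx:   # assumed entry point
    zmodel = zctx.model()        # assumed model handle
    zmodel.onnx("my_model.onnx")
    zmodel.update()
# on exit, the context manager would clean up the workspace automatically
```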
  • 1.7.0 (Nov 15, 2021)

  • 1.6.2 (Sep 22, 2021)

    • Added output blocks for models to prevent navigation past the end of the model graph.
    • Added a Top-K output view for tensors that match certain shapes, e.g. (1, N); classification models now have a more human-understandable output (see the NumPy sketch after these notes).
    • Updated to onnxruntime 1.8.1 to support the latest ONNX opset.
    • Improved autodetection of input shapes so that more inputs pass inference without shape errors.
    • Fixes for RAM overuse.
    • Fixes for the Mesh API.
    Source code (tar.gz)
    Source code (zip)
    Zetane-1.6.2.deb (347.23 MB)
    Zetane-1.6.2.dmg (326.94 MB)
    Zetane-1.6.2.msi (301.44 MB)
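For reference, a Top-K view over a (1, N) output is the standard sort-and-slice pattern; here is a minimal NumPy sketch (the logits are made up, purely for illustration):

```python
# Top-K over a (1, N) output tensor, as in the viewer's Top-K view.
import numpy as np

logits = np.array([[0.1, 2.3, -0.5, 1.7, 0.9]])  # made-up (1, N) output
k = 3
top_idx = np.argsort(logits[0])[::-1][:k]  # indices of the k largest scores
for rank, i in enumerate(top_idx, start=1):
    print(f"{rank}. class {i}: score {logits[0, i]:.2f}")
```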
  • 1.5.0 (Jun 16, 2021)

  • 1.4.0 (May 26, 2021)

  • 1.3.0 (Apr 21, 2021)

    • When ONNX models are loaded, an inference pass with sample data is run by default. That means all tensors, feature maps, weights, and biases should be viewable immediately after the input loads. Please let us know if there are models that don't succeed at this initial pass so we can fix them!

    (PRO) User input nodes are now attached to the model architecture diagram. When using Zetane Viewer Pro ($15/month) you can load custom inputs and send them through the model. Currently supported formats are .npy, .npz, .pb, and the majority of image formats (jpg, png, tiff, hdr, pic).

    (PRO) When user inputs are misshapen, the engine will display an error about the model's shape expectation. Note that this feature is also usable by free users, just without the error popup: the input node will load the user input and show its dimensions before attempting to run inference with the model.

    (PRO) Any errors during model inference will also appear in the UI; the shape error above is one example. Individual graph operations may fail at any point during the inference pass; the engine will attempt to populate the graph outputs up to the point of the error, giving in effect a stack trace of the model run.

    As always, we welcome feedback, bug reports, and any suggestions you might have.

    Source code (tar.gz)
    Source code (zip)
    Zetane-1.3.0.deb (395.53 MB)
    Zetane-1.3.0.dmg (451.85 MB)
    Zetane-1.3.0.msi (452.15 MB)
  • 1.2.0 (Apr 5, 2021)

    • Shape mismatch errors for running model inference are shown in the UI, describing the expected input and the given input. (PRO)
    • Changed the default UI interaction: the mouse wheel now zooms by default, and right-click drags the view.
    • Panels now scroll or move on hover, not just after being selected.
    • Tensor viewer displays the original shape from file or API, without reordering the dimensions to fit the view panel.
    • User notification for version upgrade now appears in the UI.
    • Mac / Linux now run in API mode by default.
    • Added a new ZTN snapshot for XAI features.
    • User inputs now show above the Model Explorer panel's input node.
    • A number of bug fixes and performance improvements
    Source code (tar.gz)
    Source code (zip)
    Zetane-1.2.0.deb (373.68 MB)
    Zetane-1.2.0.dmg (451.16 MB)
    Zetane-1.2.0.msi (438.49 MB)
  • 1.1.4 (Feb 22, 2021)

Owner: Zetane Systems