Neural network graphs and training metrics for PyTorch, TensorFlow, and Keras.

Overview

HiddenLayer

A lightweight library for neural network graphs and training metrics for PyTorch, TensorFlow, and Keras.

HiddenLayer is simple, easy to extend, and works great with Jupyter Notebook. It's not intended to replace advanced tools such as TensorBoard, but rather to serve cases where such tools are too heavyweight for the task. HiddenLayer was written by Waleed Abdulla and Phil Ferriere, and is licensed under the MIT License.

1. Readable Graphs

Use HiddenLayer to render a graph of your neural network in Jupyter Notebook, or to a PDF or PNG file. See the Jupyter notebook examples for TensorFlow, PyTorch, and Keras.
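A minimal sketch of that workflow (the Graph.save call and its format argument follow the demo notebooks and may differ slightly across versions):

    import torch
    import torchvision.models
    import hiddenlayer as hl

    model = torchvision.models.vgg16()
    # In Jupyter, the returned graph renders automatically; elsewhere, save it
    graph = hl.build_graph(model, torch.zeros([1, 3, 224, 224]))
    graph.save("vgg16", format="png")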

The graphs are designed to communicate the high-level architecture. Therefore, low-level details are hidden by default (e.g. weight initialization ops, gradients, internal ops of common layer types, etc.). HiddenLayer also folds commonly used sequences of layers together. For example, the Convolution -> RELU -> MaxPool sequence is very common, so it gets merged into one box for simplicity.

Customizing Graphs

The rules for hiding and folding nodes are fully customizable. You can use graph expressions and transforms to add your own rules. For example, this rule folds all the nodes of a bottleneck block of a ResNet101 into one node.

    # Fold bottleneck blocks
    ht.Fold("((ConvBnRelu > ConvBnRelu > ConvBn) | ConvBn) > Add > Relu", 
            "BottleneckBlock", "Bottleneck Block"),

2. Training Metrics in Jupyter Notebook

If you run training experiments in Jupyter Notebook then you might find this useful. You can use it to plot loss and accuracy, histograms of weights, or visualize activations of a few layers.
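A minimal sketch of that workflow (train_step() is a hypothetical stand-in for your training code; History.log and Canvas.draw_plot follow the demo notebooks):

    import hiddenlayer as hl

    history = hl.History()   # accumulates named metrics per step
    canvas = hl.Canvas()     # matplotlib figure that draws them

    for step in range(1000):
        loss, accuracy = train_step()  # hypothetical: returns two scalars
        history.log(step, loss=loss, accuracy=accuracy)
        if step % 100 == 0:
            canvas.draw_plot([history["loss"], history["accuracy"]])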

Outside Jupyter Notebook:

You can use HiddenLayer outside Jupyter Notebook as well. In a Python script run from the command line, it'll open a separate window for the metrics. And if you're on a server without a GUI, you can save snapshots of the graphs to PNG files for later inspection. See history_canvas.py for an example of this use case.
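A sketch of the headless case (the Agg backend avoids opening a window; the canvas.save() helper is an assumption here, so check history_canvas.py for the exact API):

    import matplotlib
    matplotlib.use("Agg")  # headless backend, no GUI needed

    import hiddenlayer as hl

    history = hl.History()
    canvas = hl.Canvas()
    # ...log metrics during training, then:
    canvas.draw_plot([history["loss"]])
    canvas.save("training_progress.png")  # assumed helper; see history_canvas.py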

3. Hackable

HiddenLayer is a small library. It covers the basics, but you'll likely need to extend it for your own use case. For example, say you want to represent the model accuracy as a pie chart rather than a line plot. This can be done by extending the Canvas class and adding a new method, like so:

    import numpy as np
    import hiddenlayer as hl

    class MyCanvas(hl.Canvas):
        """Extending Canvas to add a pie chart method."""
        def draw_pie(self, metric):
            # Set a square aspect ratio
            self.ax.axis('equal')
            # Get the latest value of the metric, clipped to [0, 1]
            value = np.clip(metric.data[-1], 0, 1)
            # Draw the pie chart
            self.ax.pie([value, 1 - value], labels=["Accuracy", ""])
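Using it then looks like any other canvas (this assumes you logged an "accuracy" metric into a History during training):

    canvas = MyCanvas()
    canvas.draw_pie(history["accuracy"])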

See pytorch_train.ipynb or tf_train.ipynb for an example.

The keras_train.ipynb notebook contains an actual training example that illustrates how to create a custom Canvas to plot a confusion matrix alongside validation metrics.

Demos

PyTorch:

  • pytorch_graph.ipynb: This notebook illustrates how to generate graphs for various PyTorch models.
  • pytorch_train.ipynb: Demonstrates tracking and visualizing training metrics with PyTorch.

TensorFlow:

  • tf_graph.ipynb: This notebook illustrates how to generate graphs for various TF SLIM models.
  • tf_train.ipynb: Demonstrates tracking and visualizing training metrics with TensorFlow.
  • history_canvas.py: An example of using HiddenLayer without a GUI.

Keras:

  • keras_graph.ipynb: This notebook illustrates how to generate graphs for various Keras models.
  • keras_train.ipynb: Demonstrates model graphing, visualization of training metrics, and how to create a custom Keras callback that uses a subclassed Canvas in order to plot a confusion matrix at the end of each training epoch.

Contributing

HiddenLayer is released under the MIT license. Feel free to extend it or customize it for your needs. If you discover bugs, which is likely since this is an early release, please do report them or submit a pull request.

If you'd like to contribute new features, here are a few things we wanted to add but never got around to:

  • Support for older versions of Python. Currently, it's only tested on Python 3.6.
  • Optimization to support logging big experiments.

Installation

1. Prerequisites

  • a. Python 3, NumPy, Matplotlib, and Jupyter Notebook.

  • b. Either TensorFlow or PyTorch

  • c. GraphViz and its Python wrapper to generate network graphs. The easiest way to install them:

    If you use Conda:

    conda install graphviz python-graphviz

    Otherwise, install GraphViz from your system's package manager, then install the Python wrapper with pip:

    pip3 install graphviz

2. Install HiddenLayer

a. Clone From GitHub (Developer Mode)

Use this if you want to edit or customize the library locally.

# Clone the repository
git clone git@github.com:waleedka/hiddenlayer.git
cd hiddenlayer

# Install in dev mode
pip install -e .

b. Using pip ("stable" release)

pip install hiddenlayer

c. Install to your site-packages directly from GitHub

Use the following if you just want to install the latest version of the library:

pip install git+https://github.com/waleedka/hiddenlayer.git
Comments
  • get_trace_graph private in Pytorch 1.4

    System: Python 3.7.5, PyTorch 1.4, torchvision 0.5.

    Code and error:

    import torch
    from torchvision.models import vgg16
    import hiddenlayer as hl

    model = vgg16(pretrained=True).features
    hl.build_graph(model, torch.zeros([1, 3, 224, 224]))

    # hiddenlayer/pytorch_builder.py, in import_graph
         69     # Run the Pytorch graph to get a trace and generate a graph from it
    ---> 70     trace, out = torch.jit.get_trace_graph(model, args)
         71     torch.onnx._optimize_trace(trace, torch.onnx.OperatorExportTypes.ONNX)
         72     torch_graph = trace.graph()

    AttributeError: module 'torch.jit' has no attribute 'get_trace_graph'

    Context: since PyTorch 1.4 (see the PyTorch pull request that made the change), the function jit.get_trace_graph is private (jit._get_trace_graph).
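    A hedged workaround sketch (a user-side monkeypatch, not an official fix): alias the private name back before hiddenlayer's builder looks it up.

    import torch

    # On PyTorch >= 1.4 the tracer moved to a private name; re-expose it.
    # Note: the ONNX formatting also changed in 1.4, so this alias alone
    # may not be enough.
    if not hasattr(torch.jit, "get_trace_graph"):
        torch.jit.get_trace_graph = torch.jit._get_trace_graph

    import hiddenlayer as hl  # pytorch_builder now finds the old name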

    opened by nanohanno 13
  • 'torch._C.Value' object has no attribute 'uniqueName'

    import torch
    import torchvision.models
    import hiddenlayer as hl
    # VGG16
    model = torchvision.models.vgg16()
    
    # Build HiddenLayer graph
    # Jupyter Notebook renders it automatically
    hl.build_graph(model, torch.zeros([1, 3, 224, 224]))
    

    Environment: Python 3.6, torch 1.3.1, hiddenlayer 0.2.

    AttributeError                            Traceback (most recent call last)
    <ipython-input> in <module>
          7 # Build HiddenLayer graph
          8 # Jupyter Notebook renders it automatically
    ----> 9 hl.build_graph(model, torch.zeros([1, 3, 224, 224]))

    ~/anaconda3/envs/py36/lib/python3.6/site-packages/hiddenlayer-0.2-py3.6.egg/hiddenlayer/graph.py in build_graph(model, args, input_names, transforms, framework_transforms)
        141     from .pytorch_builder import import_graph, FRAMEWORK_TRANSFORMS
        142     assert args is not None, "Argument args must be provided for Pytorch models."
    --> 143     import_graph(g, model, args)
        144 elif framework == "tensorflow":
        145     from .tf_builder import import_graph, FRAMEWORK_TRANSFORMS

    ~/anaconda3/envs/py36/lib/python3.6/site-packages/hiddenlayer-0.2-py3.6.egg/hiddenlayer/pytorch_builder.py in import_graph(hl_graph, model, args, input_names, verbose)
         88     shape = get_shape(torch_node)
         89     # Add HL node
    ---> 90     hl_node = Node(uid=pytorch_id(torch_node), name=None, op=op,
         91                    output_shape=shape, params=params)
         92     hl_graph.add_node(hl_node)

    ~/anaconda3/envs/py36/lib/python3.6/site-packages/hiddenlayer-0.2-py3.6.egg/hiddenlayer/pytorch_builder.py in pytorch_id(node)
         43     # After ONNX simplification, the scopeName is not unique anymore
         44     # so append node outputs to guarantee uniqueness
    ---> 45     return node.scopeName() + "/outputs/" + "/".join([o.uniqueName() for o in node.outputs()])

    ~/anaconda3/envs/py36/lib/python3.6/site-packages/hiddenlayer-0.2-py3.6.egg/hiddenlayer/pytorch_builder.py in <listcomp>(.0)
    ---> 45     return node.scopeName() + "/outputs/" + "/".join([o.uniqueName() for o in node.outputs()])

    AttributeError: 'torch._C.Value' object has no attribute 'uniqueName'
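    For context, Value.uniqueName() was renamed to debugName() in newer PyTorch releases. A hedged sketch of a version-tolerant pytorch_id (patterned on pytorch_builder.py line 45, not the library's actual patch):

    def pytorch_id(node):
        # After ONNX simplification, the scopeName is not unique anymore
        # so append node outputs to guarantee uniqueness
        def out_name(o):
            # uniqueName() was renamed to debugName() in newer PyTorch
            return o.debugName() if hasattr(o, "debugName") else o.uniqueName()
        return node.scopeName() + "/outputs/" + "/".join(out_name(o) for o in node.outputs())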

    opened by woodg07 5
  • hiddenlayer cannot identify that my module is indeed a torch.nn.Module

    I have a custom model class, say MyNet(Net), which inherits from Net(nn.Module).

    When I call hl.build_graph(model, ...), hiddenlayer then raises the exception:

    • ValueError: model input param must be a PyTorch, TensorFlow, or Keras-with-TensorFlow-backend model.

    When I put everything inside a single class, it works...

    opened by gmunizc 5
  • Bugfix/get trace graph

    Builds on the earlier fix by jccurtis. This pull request fixes issue #66. It makes hiddenlayer work with PyTorch 1.4, where get_trace_graph became private and the ONNX formatting appears to have changed.

    opened by nanohanno 4
  • module 'torch.onnx' has no attribute 'OperatorExportTypes'

    I run this code in Jupyter Notebook, but an error occurs:

    import torch
    import torchvision.models
    import hiddenlayer as hl

    model = torchvision.models.vgg16()
    hl.build_graph(model, torch.zeros([1, 3, 224, 224]))

    AttributeError: module 'torch.onnx' has no attribute 'OperatorExportTypes'

    I ran the code on Ubuntu 16.04 with PyTorch 0.4.0.

    opened by geekac 4
  • module 'torch.jit' has no attribute '_get_trace_graph'

    Hi, thank you for your amazing work. I am trying to use it to visualize my own model. When I run:

    # my model
    import hiddenlayer as hl
    hl.build_graph(model, torch.zeros([1, 3, 224, 224]))

    I get: AttributeError: module 'torch.jit' has no attribute '_get_trace_graph'. Could you tell me how I can solve it? Thank you.

    opened by zhongqiu1245 3
  • "torch._C.Value has no attribute 'uniqueName'" Error running with PyTorch 1.2

    PyTorch version: 1.2.0a, Python: 3.6.8.

    Exception has occurred: AttributeError: 'torch._C.Value' object has no attribute 'uniqueName'

    File "hiddenlayer/hiddenlayer/pytorch_builder.py", line 45, in <listcomp>
      return node.scopeName() + "/outputs/" + "/".join([o.uniqueName() for o in node.outputs()])
    File "hiddenlayer/hiddenlayer/pytorch_builder.py", line 45, in pytorch_id
      return node.scopeName() + "/outputs/" + "/".join([o.uniqueName() for o in node.outputs()])
    File "hiddenlayer/hiddenlayer/pytorch_builder.py", line 90, in import_graph
      hl_node = Node(uid=pytorch_id(torch_node), name=None, op=op,
    File "hiddenlayer/hiddenlayer/graph.py", line 143, in build_graph
      import_graph(g, model, args)
    File "visualizer.py", line 20, in <module>
      graph = hl.build_graph(model, input)

    It works well with an older version of PyTorch (0.4.1).

    opened by cted18 3
  • adaptive_avg_pool2d does not exist

    Hi, I met this problem when I wanted to visualize a DenseNet. The error is shown below:

    c:\python35\lib\site-packages\torch\onnx\utils.py:446: UserWarning: ONNX export failed on ATen operator adaptive_avg_pool2d because torch.onnx.symbolic.adaptive_avg_pool2d does not exist
      .format(op_name, op_name))

    Actually, there is no adaptive_avg_pool2d in my DenseNet; only nn.AvgPool2d() exists.

    opened by zhouyuangan 3
  • support yolov5

    Hello, I want to draw the graph of YOLOv5 with the following code:

    import torch
    
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
    # print("==>> model: ", model)
    
    import hiddenlayer as hl
    graph = hl.build_graph(model, torch.zeros([1, 3, 512, 512]))
    

    but it reports an error:

    c:\ProgramData\Anaconda3\lib\site-packages\hiddenlayer\pytorch_builder.py in import_graph(hl_graph, model, args, input_names, verbose)
         69     # Run the Pytorch graph to get a trace and generate a graph from it
         70     trace, out = torch.jit._get_trace_graph(model, args)
    ---> 71     torch_graph = torch.onnx._optimize_trace(trace, torch.onnx.OperatorExportTypes.ONNX)
    ...
    RuntimeError: Unsupported: ONNX export of index_put in opset 9. Please try opset version 11.
    

    Some issues (https://github.com/onnx/onnx/issues/3057, https://github.com/pytorch/pytorch/issues/46237) say to set opset_version=11 in torch.onnx.export(), but I cannot find a torch.onnx.export() call here, so I don't know how to fix this error.
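    For comparison, a plain ONNX export (outside hiddenlayer) is where opset_version would normally go; hiddenlayer's pytorch_builder drives the trace itself and exposes no such knob, so this is only a hedged illustration of the suggestion from those issues:

    import torch

    # Hypothetical standalone export with the opset the error asks for
    torch.onnx.export(model, torch.zeros([1, 3, 512, 512]),
                      "yolov5s.onnx", opset_version=11)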

    opened by wwdok 2
  • How to display branch nodes / parallel blocks

    I am confused about how to fold the following structure, which has branches coming out of a node. This is what I tried:

    hl.transforms.Fold("Conv > LeakyRelu > Conv > Concat > LeakyRelu", "Fire", "FireBlock")

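    Going by the README's bottleneck example, parallel branches are written with "|". A hedged guess at the pattern for this block (assuming the trace splits into two convolutions that meet at the Concat; the real structure must be read off the rendered graph):

    import hiddenlayer.transforms as ht

    # Sketch only: the "(Conv | Conv)" split is a guess at the traced structure
    ht.Fold("Conv > LeakyRelu > (Conv | Conv) > Concat > LeakyRelu",
            "Fire", "FireBlock")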

    opened by talhaanwarch 2
  • TypeError: zeros(): argument 'out' (position 2) must be Tensor, not list

    I want to visualize a model with three inputs and I have a problem feeding them in.

    I followed this idea from Waleed (https://github.com/waleedka) and tried to visualize 3 patches with 3 inputs, [32, 6, 25, 25], [32, 6, 51, 51], and [32, 6, 75, 75], with the code line:

    hl.build_graph(model, torch.zeros([32, 6, 25, 25], [32, 6, 51, 51], [32, 6, 75, 75]))

    1. My code had the error: TypeError: zeros(): argument 'out' (position 2) must be Tensor, not list. How do I fix this problem? I also tried several variants, for example:

    # hl.build_graph(net, torch.zeros(32, 6, 25, 25), torch.zeros(32, 6, 51, 51), torch.zeros(32, 6, 75, 75))
    # hl.build_graph(net, torch.zeros((32, 6, 25, 25), (32, 6, 51, 51), (32, 6, 75, 75)))
    # hl.build_graph(net, torch.zeros([(32, 6, 25, 25)], [(32, 6, 51, 51)], [(32, 6, 75, 75)]))
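    For what it's worth, the TypeError comes from torch.zeros() itself: its second positional argument is out, so torch.zeros([...], [...], [...]) is invalid. A hedged sketch passing the three inputs as a tuple of separate tensors instead (assuming build_graph forwards args to the tracer unchanged):

    hl.build_graph(model, (torch.zeros(32, 6, 25, 25),
                           torch.zeros(32, 6, 51, 51),
                           torch.zeros(32, 6, 75, 75)))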

    I did succeed with the first patch alone, using hl.build_graph(model, torch.zeros([32, 6, 25, 25])).

    2. A further question about the window size of the visualization: can we show the full model in one window, or save it? As it stands, I have to scroll many times to capture my whole model.


    Thank you.

    opened by tphankr 2
  • How to plot bert model? (Transfomer models)

    I am trying to plot a BERT model using this package, but I am unable to do it.

    Code:

    from transformers import AutoModel, AutoTokenizer
    model = AutoModel.from_pretrained("bert-base-uncased")
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    inputs = tokenizer("Hello world!", return_tensors="pt")
    

    After that, how do I plot it?

    import hiddenlayer as hl
    hl.build_graph(model, inputs[0])
    
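    For reference, the tokenizer returns a dict-like BatchEncoding rather than a tuple, so inputs[0] is not a tensor. A hedged sketch passing the input IDs instead (whether a BERT forward traces cleanly through hiddenlayer is a separate question):

    hl.build_graph(model, inputs["input_ids"])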
    opened by indramal 0
  • Error while trying to run the example

    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>
          4 # Build HiddenLayer graph
          5 # Jupyter Notebook renders it automatically
    ----> 6 hl.build_graph(model, torch.zeros([1, 3, 224, 224]))

    (3 frames elided)

    /usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py in _optimize_graph(graph, operator_export_type, _disable_torch_constant_prop, fixed_batch_size, params_dict, dynamic_axes, input_names, module)
        276     # Unpack quantized weights for conv and linear ops and insert into graph.
        277     _C._jit_pass_onnx_unpack_quantized_weights(
    --> 278         graph, params_dict, symbolic_helper.is_caffe2_aten_fallback()
        279     )
        280     if symbolic_helper.is_caffe2_aten_fallback():

    TypeError: _jit_pass_onnx_unpack_quantized_weights(): incompatible function arguments. The following argument types are supported:
        1. (arg0: torch::jit::Graph, arg1: Dict[str, IValue], arg2: bool) -> Dict[str, IValue]

    Invoked with: graph(%input.1 : Float(1, 3, 224, 224, strides=[150528, 50176, 224, 1], requires_grad=0, device=cpu), ...) [the full traced VGG16 graph, several hundred lines, elided here], None, False

    opened by Junaid199f 0
  • Error when trying to print the Model

    Hello, thank you for sharing your lib.

    I'm running into an error: RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
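    For what it's worth, that first RuntimeError only means the dummy input and the weights live on different devices. A hedged sketch keeping both on the GPU instead of moving the model to the CPU:

    hl.build_graph(self.online_net, torch.zeros([1, 4, 32, 32]).to('cuda'))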

    When I shift it to the CPU: hl.build_graph(self.online_net.to('cpu'), torch.zeros([1, 4, 32, 32]))

    I'm getting: RuntimeError: Expected node type 'onnx::Constant' for argument 'rounding_mode' of node '_div_rounding_mode', got 'prim::Param'.

    I'll also paste my code below. Maybe somebody can figure out why it's not working.

    # -*- coding: utf-8 -*-
    from __future__ import division
    import math
    import torch
    from torch import nn
    from torch.nn import functional as F
    import functools
    import operator
    from torch.nn.utils import spectral_norm
    
    
    # Factorised NoisyLinear layer with bias
    class NoisyLinear(nn.Module):
      def __init__(self, in_features, out_features, std_init=0.3):
        super(NoisyLinear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.std_init = std_init
        self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.weight_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.register_buffer('weight_epsilon', torch.empty(out_features, in_features))
        self.bias_mu = nn.Parameter(torch.empty(out_features))
        self.bias_sigma = nn.Parameter(torch.empty(out_features))
        self.register_buffer('bias_epsilon', torch.empty(out_features))
        self.reset_parameters()
        self.reset_noise()
    
      def reset_parameters(self):
        mu_range = 1 / math.sqrt(self.in_features)
        self.weight_mu.data.uniform_(-mu_range, mu_range)
        self.weight_sigma.data.fill_(self.std_init / math.sqrt(self.in_features))
        self.bias_mu.data.uniform_(-mu_range, mu_range)
        self.bias_sigma.data.fill_(self.std_init / math.sqrt(self.out_features))
    
      def _scale_noise(self, size):
        x = torch.randn(size, device=self.weight_mu.device)
        return x.sign().mul_(x.abs().sqrt_())
    
      def reset_noise(self):
        epsilon_in = self._scale_noise(self.in_features)
        epsilon_out = self._scale_noise(self.out_features)
        self.weight_epsilon.copy_(epsilon_out.ger(epsilon_in))
        self.bias_epsilon.copy_(epsilon_out)
    
      def forward(self, input):
        if self.training:
          return F.linear(input, self.weight_mu + self.weight_sigma * self.weight_epsilon, self.bias_mu + self.bias_sigma * self.bias_epsilon)
        else:
          return F.linear(input, self.weight_mu, self.bias_mu)
    
    
    
    class ResBlock(nn.Module):
        def __init__(self, in_channels, out_channels, downsample):
            super().__init__()
            if downsample:
                self.conv1 = nn.Conv2d(
                    in_channels, out_channels, kernel_size=3, stride=2, padding=1)
                self.shortcut = nn.Sequential(
    
                    nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=2)#,
                    #nn.BatchNorm2d(out_channels)
    
                )
            else:
                self.conv1 = nn.Conv2d(
                    in_channels, out_channels, kernel_size=3, stride=1, padding=1)
                self.shortcut = nn.Sequential()
    
            self.conv2 = nn.Conv2d(out_channels, out_channels,
                                   kernel_size=3, stride=1, padding=1)
    
            #self.bn1 = nn.BatchNorm2d(out_channels)
            #self.bn2 = nn.BatchNorm2d(out_channels)
    
        def forward(self, input):
            shortcut = self.shortcut(input)
            input = nn.ReLU()(self.conv1(input))
            input = nn.ReLU()(self.conv2(input))
            input = input + shortcut
            return nn.ReLU()(input)
    
    
    class Rainbow_ResNet(nn.Module):
      def __init__(self, args, action_space, resblock, repeat):
        super(Rainbow_ResNet, self).__init__()
        self.atoms = args.atoms
        self.action_space = action_space
    
        filters = [128, 128, 256, 512, 1024]
        self.layer0 = nn.Sequential(
          nn.Conv2d(4, 128, kernel_size=5, stride=1, padding=1),
          #nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
          #nn.BatchNorm2d(64),
    
          nn.ReLU())
    
        self.layer1 = nn.Sequential()
        self.layer1.add_module('conv2_1', ResBlock(filters[0], filters[1], downsample=True))
        for i in range(1, repeat[0]):
                self.layer1.add_module('conv2_%d'%(i+1,), ResBlock(filters[1], filters[1], downsample=False))
    
        self.layer2 = nn.Sequential()
    
        self.layer2.add_module('conv3_1', ResBlock(filters[1], filters[2], downsample=True))
    
        for i in range(1, repeat[1]):
                self.layer2.add_module('conv3_%d' % (
                    i+1,), ResBlock(filters[2], filters[2], downsample=False))
    
    
        #self.layer3 = nn.Sequential()
        #self.layer3.add_module('conv4_1', ResBlock(filters[2], filters[3], downsample=True))
        #for i in range(1, repeat[2]):
        #    self.layer3.add_module('conv4_%d' % (
        #        i+1,), ResBlock(filters[3], filters[3], downsample=False))
    
        #self.layer4 = nn.Sequential()
        #self.layer4.add_module('conv5_1', ResBlock(filters[3], filters[4], downsample=True))
        #for i in range(1, repeat[3]):
        #    self.layer4.add_module('conv5_%d'%(i+1,),ResBlock(filters[4], filters[4], downsample=False))
    
        #self.dense = nn.Sequential(spectral_norm(nn.Linear(12544, 1024)), nn.ReLU())
        self.fc_h_v = spectral_norm(nn.Linear(16384, 512))
        self.fc_h_a = spectral_norm(nn.Linear(16384, 512))
    
        self.fc_z_v = NoisyLinear(512, self.atoms, std_init=args.noisy_std)
        self.fc_z_a = NoisyLinear(512, action_space * self.atoms, std_init=args.noisy_std) 
    
      def forward(self, x, log=False):
        input = self.layer0(x)
        input = self.layer1(input)
        input = self.layer2(input)
    
        #input = self.layer3(input)
        #input = self.layer4(input)
        input = torch.flatten(input, start_dim=1)
        #input = self.dense(input)
    
        v_uuv = self.fc_z_v(F.relu(self.fc_h_v(input)))  # Value stream
        a_uuv = self.fc_z_a(F.relu(self.fc_h_a(input)))  # Advantage stream
    
        #v_uav, a_uav = v_uav.view(-1, 1, self.atoms), a_uav.view(-1, self.action_space, self.atoms)    
        v_uuv, a_uuv = v_uuv.view(-1, 1, self.atoms), a_uuv.view(-1, self.action_space, self.atoms)
        
        #q_uav = v_uav + a_uav - a_uav.mean(1, keepdim=True)  # Combine streams
        q_uuv = v_uuv + a_uuv - a_uuv.mean(1, keepdim=True)  # Combine streams
    
        if log:  # Use log softmax for numerical stability
          #q_uav = F.log_softmax(q_uav, dim=2)  # Log probabilities with action over second dimension
          q_uuv = F.log_softmax(q_uuv, dim=2)  # Log probabilities with action over second dimension
        else:
          #q_uav = F.softmax(q_uav, dim=2)  # Probabilities with action over second dimension
          q_uuv = F.softmax(q_uuv, dim=2)  # Probabilities with action over second dimension
        return  q_uuv #q_uav,
      def reset_noise(self):
        for name, module in self.named_children():
          if 'fc_z' in name:
            module.reset_noise()
    
    
    opened by Mateus224 0
  • TypeError for Pytorch Model

    Hello, I am trying to use hiddenlayer to draw a PyTorch model, and I get an error coming out of ONNX:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    /home/ubuntu/mstar/scripts/rlfh/visualization.ipynb Cell 4' in <cell line: 13>()
          6 #model = torchvision.models.vgg16()
          8 model = torch.nn.Sequential(
          9     nn.Linear(10, 10),
         10     nn.Linear(10, 2)
         11 )
    ---> 13 hl.build_graph(model, torch.zeros([1, 10]))

    File ~/py38/lib/python3.8/site-packages/hiddenlayer/graph.py:143, in build_graph(model, args, input_names, transforms, framework_transforms)
        141     from .pytorch_builder import import_graph, FRAMEWORK_TRANSFORMS
        142     assert args is not None, "Argument args must be provided for Pytorch models."
    --> 143     import_graph(g, model, args)
        144 elif framework == "tensorflow":
        145     from .tf_builder import import_graph, FRAMEWORK_TRANSFORMS

    File ~/py38/lib/python3.8/site-packages/hiddenlayer/pytorch_builder.py:71, in import_graph(hl_graph, model, args, input_names, verbose)
         66 def import_graph(hl_graph, model, args, input_names=None, verbose=False):
         67     # TODO: add input names to graph
         69     # Run the Pytorch graph to get a trace and generate a graph from it
         70     trace, out = torch.jit._get_trace_graph(model, args)
    ---> 71     torch_graph = torch.onnx._optimize_trace(trace, torch.onnx.OperatorExportTypes.ONNX)
         73     # Dump list of nodes (DEBUG only)
         74     if verbose:

    File ~/py38/lib/python3.8/site-packages/torch/onnx/__init__.py:394, in _optimize_trace(graph, operator_export_type)
        391 def _optimize_trace(graph, operator_export_type):
        392     from torch.onnx import utils
    --> 394     return utils._optimize_graph(graph, operator_export_type)

    File ~/py38/lib/python3.8/site-packages/torch/onnx/utils.py:276, in _optimize_graph(graph, operator_export_type, _disable_torch_constant_prop, fixed_batch_size, params_dict, dynamic_axes, input_names, module)
        274 symbolic_helper._quantized_ops.clear()
        275 # Unpack quantized weights for conv and linear ops and insert into graph.
    --> 276 _C._jit_pass_onnx_unpack_quantized_weights(
        277     graph, params_dict, symbolic_helper.is_caffe2_aten_fallback()
        278 )
        279 if symbolic_helper.is_caffe2_aten_fallback():
    
    TypeError: _jit_pass_onnx_unpack_quantized_weights(): incompatible function arguments. The following argument types are supported:
        1. (arg0: torch::jit::Graph, arg1: Dict[str, IValue], arg2: bool) -> Dict[str, IValue]
    
    Invoked with: graph(%0 : Float(1, 10, strides=[10, 1], requires_grad=0, device=cpu),
          %1 : Float(10, 10, strides=[10, 1], requires_grad=1, device=cpu),
          %2 : Float(10, strides=[1], requires_grad=1, device=cpu),
          %3 : Float(2, 10, strides=[10, 1], requires_grad=1, device=cpu),
          %4 : Float(2, strides=[1], requires_grad=1, device=cpu)):
      %15 : Float(1, 10, strides=[10, 1], requires_grad=1, device=cpu) = aten::linear(%0, %1, %2) # /home/ubuntu/py38/lib/python3.8/site-packages/torch/nn/modules/linear.py:114:0
      %16 : Float(1, 2, strides=[2, 1], requires_grad=1, device=cpu) = aten::linear(%15, %3, %4) # /home/ubuntu/py38/lib/python3.8/site-packages/torch/nn/modules/linear.py:114:0
      return (%16)
    , None, False
    

    Runtime: Ubuntu 20.04, Python 3.8, torch 1.13.0 (experimental), hiddenlayer 0.3.

    Script to reproduce the error:

    import hiddenlayer as hl
    import torch
    import torch.nn as nn
    import torchvision.models
    
    #model = torchvision.models.vgg16()
    
    model = torch.nn.Sequential(
        nn.Linear(10, 10),
        nn.Linear(10, 2)
    )
    
    hl.build_graph(model, torch.zeros([1, 10]))
    
    opened by hsl89 4
  • How to display dimensions and change font to Times

    Good day! I just discovered this amazing package while trying to visualize a model I made. May I know how I can display the dimensions (e.g., 100x3, 1x200x3, etc.), and how I can change the font to Times New Roman, since I'll be using the figure in an academic paper? Thank you! Much power!

    opened by egmaminta 1