CNN visualization tool in TensorFlow

Overview

tf_cnnvis

A blog post describing the library: https://medium.com/@falaktheoptimist/want-to-look-inside-your-cnn-we-have-just-the-right-tool-for-you-ad1e25b30d90

tf_cnnvis is a CNN visualization library which you can use to better understand your own CNNs. We use the TensorFlow library as the backend, and the generated images are displayed in TensorBoard. We have implemented two CNN visualization techniques so far:

  1. Based on the paper Visualizing and Understanding Convolutional Networks by Matthew D. Zeiler and Rob Fergus. The goal here is to reconstruct the input image from the information contained in any given layer of the convolutional neural network. Here are a few examples:

Figure 1: Original image and the reconstructed versions from max-pool layers 1, 2, and 3 of AlexNet, generated using tf_cnnvis.

  2. CNN visualization based on Deep Dream by Google. Here's the relevant blog post explaining the technique. In essence, it attempts to construct an input image that maximizes the activation for a given output. We present some samples below:
Figure 2: Deep Dream samples for the classes carbonara, ibex, elephant, ostrich; cheeseburger, tennis ball, fountain pen, clock tower; and cauliflower, baby milk bottle, sea lion, dolphin.

Requirements:

  • TensorFlow (>= 1.8)
  • numpy
  • scipy
  • h5py
  • wget
  • Pillow
  • six
  • scikit-image

If you are using pip, you can install these with:

pip install tensorflow numpy scipy h5py wget Pillow six scikit-image

Setup script

Clone the repository

git clone https://github.com/InFoCusp/tf_cnnvis.git

And run

sudo pip install setuptools
sudo pip install six
sudo python setup.py install
sudo python setup.py clean

Citation

If you use this library in your work, please cite

  @misc{tf_cnnvis,
    author = {Bhagyesh Vikani and Falak Shah},
    title = {CNN Visualization},
    year = {2017},
    howpublished = {\url{https://github.com/InFoCusp/tf_cnnvis/}},
    doi = {10.5281/zenodo.2594491}
  }

API

tf_cnnvis.activation_visualization(graph_or_path, value_feed_dict, input_tensor=None, layers='r', path_logdir='./Log', path_outdir='./Output')

The function to generate the activation visualizations of the input image at the given layer.

Parameters

  • graph_or_path (tf.Graph object or String) – TF graph or [Path-to-saved-graph] as String containing the CNN.

  • value_feed_dict (dict) – Values of placeholders to feed while evaluating the graph

    • dict : {placeholder1 : value1, ...}
  • input_tensor (tf.Tensor (Default = None)) – the input tensor of the model, i.e. where images enter the model. Note: this is not a standalone tensor or placeholder separate from the model

  • layers (list or String (Default = 'r')) –

    • layerName : Reconstruction from a layer specified by name
    • ‘r’ : Reconstruction from all the relu layers
    • ‘p’ : Reconstruction from all the pooling layers
    • ‘c’ : Reconstruction from all the convolutional layers
  • path_outdir (String (Default = "./Output")) – [path-to-dir] to save results to disk as images

  • path_logdir (String (Default = "./Log")) – [path-to-log-dir] in which to write the log file for TensorBoard visualization

Returns

  • is_success (boolean) – True if the function ran successfully. False otherwise
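
For example, here is a minimal sketch of a call on the default graph. The placeholder X and the image batch im are hypothetical stand-ins for your own model's input and data:

import numpy as np
import tensorflow as tf
import tf_cnnvis

# Hypothetical input placeholder of a CNN built elsewhere on this graph
X = tf.placeholder(tf.float32, shape=[None, 224, 224, 3])
# ... build or import the CNN on top of X here ...
im = np.random.rand(1, 224, 224, 3).astype(np.float32)  # stand-in image batch

is_success = tf_cnnvis.activation_visualization(
    graph_or_path=tf.get_default_graph(),  # graph containing the CNN
    value_feed_dict={X: im},               # values for every placeholder
    input_tensor=X,                        # where images enter the model
    layers='r',                            # all the ReLU layers
    path_logdir='./Log',
    path_outdir='./Output')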

tf_cnnvis.deconv_visualization(graph_or_path, value_feed_dict, input_tensor=None, layers='r', path_logdir='./Log', path_outdir='./Output')

The function to generate the visualizations of the input image reconstructed from the feature maps of a given layer.

Parameters

  • graph_or_path (tf.Graph object or String) – TF graph or [Path-to-saved-graph] as String containing the CNN.

  • value_feed_dict (dict) – Values of placeholders to feed while evaluating the graph

    • dict : {placeholder1 : value1, ...}
  • input_tensor (tf.Tensor (Default = None)) – the input tensor of the model, i.e. where images enter the model. Note: this is not a standalone tensor/placeholder separate from the model

  • layers (list or String (Default = 'r')) –

    • layerName : Reconstruction from a layer specified by name
    • ‘r’ : Reconstruction from all the relu layers
    • ‘p’ : Reconstruction from all the pooling layers
    • ‘c’ : Reconstruction from all the convolutional layers
  • path_outdir (String (Default = "./Output")) – [path-to-dir] to save results to disk as images

  • path_logdir (String (Default = "./Log")) – [path-to-log-dir] in which to write the log file for TensorBoard visualization

Returns

  • is_success (boolean) – True if the function ran successfully. False otherwise
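
deconv_visualization takes the same arguments as activation_visualization, so, under the same assumptions as the sketch above, only the function name changes; a list of layer types or names is also accepted:

is_success = tf_cnnvis.deconv_visualization(
    graph_or_path=tf.get_default_graph(),
    value_feed_dict={X: im},
    input_tensor=X,
    layers=['r', 'p', 'c'],  # reconstruct from ReLU, pooling, and conv layers
    path_logdir='./Log',
    path_outdir='./Output')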

tf_cnnvis.deepdream_visualization(graph_or_path, value_feed_dict, layer, classes, input_tensor=None, path_logdir='./Log', path_outdir='./Output')

The function to generate Deep Dream visualizations: it attempts to construct an input image that maximizes the activations of the given classes at the given layer.

Parameters

  • graph_or_path (tf.Graph object or String) – TF graph or [Path-to-saved-graph] as String containing the CNN.

  • value_feed_dict (dict) – Values of placeholders to feed while evaluating the graph

    • dict : {placeholder1 : value1, ...}
  • layer (String) – name of a layer in the TF graph

  • classes (List) – list of feature-map indices for the classification layer

  • input_tensor (tf.Tensor (Default = None)) – the input tensor of the model, i.e. where images enter the model. Note: this is not a standalone tensor/placeholder separate from the model

  • path_outdir (String (Default = "./Output")) – [path-to-dir] to save results to disk as images

  • path_logdir (String (Default = "./Log")) – [path-to-log-dir] in which to write the log file for TensorBoard visualization

Returns

  • is_success (boolean) – True if the function ran successfully. False otherwise
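
A minimal sketch of a Deep Dream call, reusing the hypothetical X and im from the sketches above; the layer name and class indices below are placeholders that must match nodes in your own graph:

is_success = tf_cnnvis.deepdream_visualization(
    graph_or_path=tf.get_default_graph(),
    value_feed_dict={X: im},
    layer='logits/BiasAdd',  # hypothetical name of the classification layer
    classes=[1, 2, 3],       # feature-map indices to maximize
    path_logdir='./Log',
    path_outdir='./Output')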

To visualize in TensorBoard

To start TensorBoard, run the following command on the console:

tensorboard --logdir=./Log

and on the TensorBoard homepage, look under the Images tab.

Additional helper functions

tf_cnnvis.utils.image_normalization(image, ubound=255.0, epsilon=1e-07)

Performs min-max image normalization, transforming the pixel intensity values to the range [0, ubound].

Parameters

  • image (3-D numpy array) – A numpy array to normalize
  • ubound (float (Default = 255.0)) – upper bound for an image pixel value
  • epsilon (float (Default = 1e-07)) – small constant added to the value range to avoid division by zero

Returns

  • norm_image (3-D numpy array) – The normalized image
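
A small usage sketch, assuming tf_cnnvis.utils is importable as the signatures above suggest:

import numpy as np
from tf_cnnvis import utils

img = np.random.randn(64, 64, 3)       # arbitrary 3-D float image
norm = utils.image_normalization(img)  # pixel values rescaled into [0, 255]
print(norm.min(), norm.max())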

tf_cnnvis.utils.convert_into_grid(Xs, padding=1, ubound=255.0)

Converts a 4-D numpy array into a grid of images for display.

Parameters

  • Xs (4-D numpy array (first axis contains the images)) – The 4-D array of images to put onto the grid
  • padding (int (Default = 1)) – Spacing between grid cells
  • ubound (float (Default = 255.0)) – upper bound for an image pixel value

Returns

  • (3-D numpy array) – A grid of input images
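
A short sketch tiling a batch of images into one displayable grid, under the same assumption about the utils module:

import numpy as np
from tf_cnnvis import utils

batch = np.random.rand(16, 32, 32, 3)  # 16 images; the first axis indexes them
grid = utils.convert_into_grid(batch, padding=1, ubound=255.0)
print(grid.shape)  # a single 3-D image containing the 16 tiles
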
Comments
  • Fetch argument None has invalid type <class 'NoneType'>

    Hi. I'm having a lot of trouble trying to get your function to work. I'm getting an error now that looks like it is coming from your deconvolution functions. Could you please let me know what you think the solution is?

    Code I used:

    is_loaded=tf_cnnvis.deconv_visualization(graph_or_path=tf.get_default_graph(),
                                             value_feed_dict={x_pl:x_batch, y_gt:targets_batch, is_training:False, valid_eval_accs:1000.0, valid_xent:1000.0},
                                             layers='r',
                                             path_logdir="c:/p17/logs/tf_cnnvis/flowers/1/logs",
                                             path_outdir="c:/p17/logs/tf_cnnvis/flowers/1/out")
    

    Thanks.

    INFO:tensorflow:Restoring parameters from model\tmp-model
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-13-41761d870143> in <module>()
          3                                          layers='r',
          4                                          path_logdir="c:/p17/logs/tf_cnnvis/flowers/1/logs",
    ----> 5                                          path_outdir="c:/p17/logs/tf_cnnvis/flowers/1/out")
    
    C:\Users\username\Anaconda3\envs\tfGPU\lib\site-packages\tf_cnnvis-1.0.0-py3.5.egg\tf_cnnvis\tf_cnnvis.py in deconv_visualization(graph_or_path, value_feed_dict, input_tensor, layers, path_logdir, path_outdir)
        381 def deconv_visualization(graph_or_path, value_feed_dict, input_tensor = None, layers = 'r', path_logdir = './Log', path_outdir = "./Output"):
        382 	is_success = _get_visualization(graph_or_path, value_feed_dict, input_tensor = input_tensor, layers = layers, method = "deconv", 
    --> 383 		path_logdir = path_logdir, path_outdir = path_outdir)
        384         return is_success
        385 def deepdream_visualization(graph_or_path, value_feed_dict, layer, classes, input_tensor = None, path_logdir = './Log', path_outdir = "./Output"):
    
    C:\Users\username\Anaconda3\envs\tfGPU\lib\site-packages\tf_cnnvis-1.0.0-py3.5.egg\tf_cnnvis\tf_cnnvis.py in _get_visualization(graph_or_path, value_feed_dict, input_tensor, layers, path_logdir, path_outdir, method)
        149                         elif layers != None and layers.lower() in dict_layer.keys():
        150                                 layer_type = dict_layer[layers.lower()]
    --> 151                                 is_success = _visualization_by_layer_type(g, value_feed_dict, input_tensor, layer_type, method, path_logdir, path_outdir)
        152                         else:
        153                                 is_success = False
    
    C:\Users\username\Anaconda3\envs\tfGPU\lib\site-packages\tf_cnnvis-1.0.0-py3.5.egg\tf_cnnvis\tf_cnnvis.py in _visualization_by_layer_type(graph, value_feed_dict, input_tensor, layer_type, method, path_logdir, path_outdir)
        202 
        203         for layer in layers:
    --> 204                 is_success = _visualization_by_layer_name(graph, value_feed_dict, input_tensor, layer, method, path_logdir, path_outdir)
        205         return is_success
        206 
    
    C:\Users\username\Anaconda3\envs\tfGPU\lib\site-packages\tf_cnnvis-1.0.0-py3.5.egg\tf_cnnvis\tf_cnnvis.py in _visualization_by_layer_name(graph, value_feed_dict, input_tensor, layer_name, method, path_logdir, path_outdir)
        263                         elif method == "deconv":
        264                                 # deconvolution
    --> 265                                 results = _deconvolution(graph, sess, op_tensor, X, feed_dict)
        266                         elif method == "deepdream":
        267                                 # deepdream
    
    C:\Users\username\Anaconda3\envs\tfGPU\lib\site-packages\tf_cnnvis-1.0.0-py3.5.egg\tf_cnnvis\tf_cnnvis.py in _deconvolution(graph, sess, op_tensor, X, feed_dict)
        310                                                 c += 1
        311                                 if c > 0:
    --> 312                                         out.extend(sess.run(reconstruct[:c], feed_dict = feed_dict))
        313         return out
        314 def _deepdream(graph, sess, op_tensor, X, feed_dict, layer, path_outdir, path_logdir):
    
    C:\Users\username\Anaconda3\envs\tfGPU\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
        787     try:
        788       result = self._run(None, fetches, feed_dict, options_ptr,
    --> 789                          run_metadata_ptr)
        790       if run_metadata:
        791         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
    
    C:\Users\username\Anaconda3\envs\tfGPU\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
        982     # Create a fetch handler to take care of the structure of fetches.
        983     fetch_handler = _FetchHandler(
    --> 984         self._graph, fetches, feed_dict_string, feed_handles=feed_handles)
        985 
        986     # Run request and get response.
    
    C:\Users\username\Anaconda3\envs\tfGPU\lib\site-packages\tensorflow\python\client\session.py in __init__(self, graph, fetches, feeds, feed_handles)
        408     """
        409     with graph.as_default():
    --> 410       self._fetch_mapper = _FetchMapper.for_fetch(fetches)
        411     self._fetches = []
        412     self._targets = []
    
    C:\Users\username\Anaconda3\envs\tfGPU\lib\site-packages\tensorflow\python\client\session.py in for_fetch(fetch)
        228     elif isinstance(fetch, (list, tuple)):
        229       # NOTE(touts): This is also the code path for namedtuples.
    --> 230       return _ListFetchMapper(fetch)
        231     elif isinstance(fetch, dict):
        232       return _DictFetchMapper(fetch)
    
    C:\Users\username\Anaconda3\envs\tfGPU\lib\site-packages\tensorflow\python\client\session.py in __init__(self, fetches)
        335     """
        336     self._fetch_type = type(fetches)
    --> 337     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
        338     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
        339 
    
    C:\Users\username\Anaconda3\envs\tfGPU\lib\site-packages\tensorflow\python\client\session.py in <listcomp>(.0)
        335     """
        336     self._fetch_type = type(fetches)
    --> 337     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
        338     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
        339 
    
    C:\Users\username\Anaconda3\envs\tfGPU\lib\site-packages\tensorflow\python\client\session.py in for_fetch(fetch)
        225     if fetch is None:
        226       raise TypeError('Fetch argument %r has invalid type %r' %
    --> 227                       (fetch, type(fetch)))
        228     elif isinstance(fetch, (list, tuple)):
        229       # NOTE(touts): This is also the code path for namedtuples.
    
    TypeError: Fetch argument None has invalid type <class 'NoneType'>
    
    bug 
    opened by jubjamie 22
  • Object detection net(eg:faster_rcnn_resnet101) not worked with  deconv_visualization

    An object detection net (e.g. faster_rcnn_resnet101) does not work with deconv_visualization, while activation_visualization works well. Error:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-4-9aba9f0fced7> in <module>()
          3                                      layers=layers,
          4                                      path_logdir=os.path.join("Log","Inception5"),
    ----> 5                                      path_outdir=os.path.join("Output","Inception5"))
    
    /notebooks/workspace/github/tf_cnnvis/tf_cnnvis/tf_cnnvis.py in deconv_visualization(sess_graph_path, value_feed_dict, input_tensor, layers, path_logdir, path_outdir)
        408 def deconv_visualization(sess_graph_path, value_feed_dict, input_tensor = None,  layers = 'r', path_logdir = './Log', path_outdir = "./Output"):
        409     is_success = _get_visualization(sess_graph_path, value_feed_dict, input_tensor = input_tensor, layers = layers, method = "deconv",
    --> 410         path_logdir = path_logdir, path_outdir = path_outdir)
        411     return is_success
        412 
    
    /notebooks/workspace/github/tf_cnnvis/tf_cnnvis/tf_cnnvis.py in _get_visualization(sess_graph_path, value_feed_dict, input_tensor, layers, path_logdir, path_outdir, method)
        167                 elif layer != None and layer.lower() in dict_layer.keys():
        168                     layer_type = dict_layer[layer.lower()]
    --> 169                     is_success = _visualization_by_layer_type(g, value_feed_dict, input_tensor, layer_type, method, path_logdir, path_outdir)
        170                 else:
        171                     print("Skipping %s . %s is not valid layer name or layer type" % (layer, layer))
    
    /notebooks/workspace/github/tf_cnnvis/tf_cnnvis/tf_cnnvis.py in _visualization_by_layer_type(graph, value_feed_dict, input_tensor, layer_type, method, path_logdir, path_outdir)
        225 
        226     for layer in layers:
    --> 227         is_success = _visualization_by_layer_name(graph, value_feed_dict, input_tensor, layer, method, path_logdir, path_outdir)
        228     return is_success
        229 
    
    /notebooks/workspace/github/tf_cnnvis/tf_cnnvis/tf_cnnvis.py in _visualization_by_layer_name(graph, value_feed_dict, input_tensor, layer_name, method, path_logdir, path_outdir)
        289         elif method == "deconv":
        290             # deconvolution
    --> 291             results = _deconvolution(graph, sess, op_tensor, X, feed_dict)
        292         elif method == "deepdream":
        293             # deepdream
    
    /notebooks/workspace/github/tf_cnnvis/tf_cnnvis/tf_cnnvis.py in _deconvolution(graph, sess, op_tensor, X, feed_dict)
        335                         c += 1
        336                 if c > 0:
    --> 337                     out.extend(sess.run(reconstruct[:c], feed_dict = feed_dict))
        338     return out
        339 def _deepdream(graph, sess, op_tensor, X, feed_dict, layer, path_outdir, path_logdir):
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
        898     try:
        899       result = self._run(None, fetches, feed_dict, options_ptr,
    --> 900                          run_metadata_ptr)
        901       if run_metadata:
        902         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
       1118     # Create a fetch handler to take care of the structure of fetches.
       1119     fetch_handler = _FetchHandler(
    -> 1120         self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
       1121 
       1122     # Run request and get response.
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py in __init__(self, graph, fetches, feeds, feed_handles)
        425     """
        426     with graph.as_default():
    --> 427       self._fetch_mapper = _FetchMapper.for_fetch(fetches)
        428     self._fetches = []
        429     self._targets = []
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py in for_fetch(fetch)
        243     elif isinstance(fetch, (list, tuple)):
        244       # NOTE(touts): This is also the code path for namedtuples.
    --> 245       return _ListFetchMapper(fetch)
        246     elif isinstance(fetch, dict):
        247       return _DictFetchMapper(fetch)
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py in __init__(self, fetches)
        350     """
        351     self._fetch_type = type(fetches)
    --> 352     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
        353     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
        354 
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py in <listcomp>(.0)
        350     """
        351     self._fetch_type = type(fetches)
    --> 352     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
        353     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
        354 
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py in for_fetch(fetch)
        240     if fetch is None:
        241       raise TypeError('Fetch argument %r has invalid type %r' % (fetch,
    --> 242                                                                  type(fetch)))
        243     elif isinstance(fetch, (list, tuple)):
        244       # NOTE(touts): This is also the code path for namedtuples.
    
    TypeError: Fetch argument None has invalid type <class 'NoneType'>
    

    Addition:

    Variable reconstruct (defined at line 327 in tf_cnnvis.py) is [None, None, None, None, None, None, None, None]. My TensorFlow version is 1.9. When I use TensorFlow 1.4, I get the same error as #26.

    opened by jidebingfeng 14
  • deepdream_visualization error: invalid reduction dimension

    When I call the deepdream_visualization function in the following way for MNIST classification,

    random_representation = mnist_data.test.next_batch(1)
    ran_x = preprocess_batch(random_representation[0], mean, std)
    ran_y = random_representation[1]
    feed_dict = {x: ran_x, y: ran_y, is_training: False, keep_probability: 1}
    deep_dream = True
    if deep_dream:
            layer = 'Conv/convolution'
            start = time.time()
            deepdream_visualization(graph_or_path=tf.get_default_graph(), value_feed_dict=feed_dict, layer=layer,
                                    classes=[1, 2, 3, 4, 5, 6, 7, 8, 9],
                                    input_tensor=None,
                                    path_logdir="C:/Users/bucpau/PycharmProjects/Academy/Logs/",
                                    path_outdir="C:/Users/bucpau/PycharmProjects/Academy/Visualization/")
            start = time.time() - start
            print("Total time for deconvolution visualization: {} Success: {}".format(start, is_success))
    

    I get an error:

    Traceback (most recent call last):
      File "C:/Users/bucpau/PycharmProjects/Academy/CNN_MNIST.py", line 156, in <module>
        main()
      File "C:/Users/bucpau/PycharmProjects/Academy/CNN_MNIST.py", line 153, in main
        visualize_layers(test_dict)
      File "C:/Users/bucpau/PycharmProjects/Academy/CNN_MNIST.py", line 85, in visualize_layers
        path_outdir="C:/Users/bucpau/PycharmProjects/Academy/Visualization/")
      File "C:\ProgramData\Anaconda3\lib\site-packages\tf_cnnvis-1.0.0-py3.6.egg\tf_cnnvis\tf_cnnvis.py", line 393, in deepdream_visualization
      File "C:\ProgramData\Anaconda3\lib\site-packages\tf_cnnvis-1.0.0-py3.6.egg\tf_cnnvis\tf_cnnvis.py", line 138, in _get_visualization
      File "C:\ProgramData\Anaconda3\lib\site-packages\tf_cnnvis-1.0.0-py3.6.egg\tf_cnnvis\tf_cnnvis.py", line 264, in _visualization_by_layer_name
      File "C:\ProgramData\Anaconda3\lib\site-packages\tf_cnnvis-1.0.0-py3.6.egg\tf_cnnvis\tf_cnnvis.py", line 317, in _deepdream
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1382, in reduce_mean
        name=name)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 1364, in _mean
        keep_dims=keep_dims, name=name)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
        op_def=op_def)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2632, in create_op
        set_shapes_for_outputs(ret)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1911, in set_shapes_for_outputs
        shapes = shape_func(op)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1861, in call_with_requiring
        return call_cpp_shape_fn(op, require_shape_fn=True)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 595, in call_cpp_shape_fn
        require_shape_fn)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 659, in _call_cpp_shape_fn_impl
        raise ValueError(err.message)
    ValueError: Invalid reduction dimension 2 for input with 2 dimensions. for 'Mean_1' (op: 'Mean') with input shapes: [?,784], [3] and with computed input tensors: input[1] = <1 2 3>.

    Any ideas on what is happening here? Is it a bug or am I using the function incorrectly? The activation visualization works, but the deconvolution does not with the given feed_dict. The only additions to this function call compared to the activation visualization are the parameters classes and layer. Am I setting them correctly?

    bug: fixed 
    opened by pabucur 8
  • visualization of concatenation operation

    Thank you for the handy visualization tool. In my code, I have a concatenation operation (tf.concat) defined in the network, but the tool is unable to visualize any layer after the concatenation layer. Would you please help take a look?

    bug 
    opened by gapbridger 7
  • Questions about the name 'All_At_Once_Activations' in the tensorboard ?

    Thank you for your great work. I have run example 1 and got the visualization in TensorBoard. Could you explain the meaning of 'All_At_Once_Activations', 'All_At_Once_Deconv' and 'One_By_One_Deconv'?

    opened by douhaoexia 6
  • Example not working: No Layer with layer name = conv1...

    Hi! I got this error when I tried to run the example. It says:

    No Layer with layer name = conv1
    No Layer with layer name = conv2_1
    No Layer with layer name = conv2_2
    No Layer with layer name = conv3
    No Layer with layer name = conv4_1
    No Layer with layer name = conv4_2
    No Layer with layer name = conv5_1
    No Layer with layer name = conv5_2
    Skipping. Too many featuremap. May cause memory errors.
    Skipping. Too many featuremap. May cause memory errors.
    No Layer with layer name = MaxPool
    No Layer with layer name = MaxPool_1
    No Layer with layer name = MaxPool_2
    No Layer with layer name = MaxPool_3
    No Layer with layer name = MaxPool_4
    Total Time = 39.663317

    When I tried to use the command tf_cnnvis.get_visualization(graph_or_path = tf.get_default_graph(), value_feed_dict = feed_dict, input_tensor=None, layers=['r','p','c'], path_logdir='./Log', path_outdir='./Output', force=False, n=8) in a simple model with only 2 conv layers, 1 max_pool and 2 fc layers, it didn't generate any output/log files.

    Thank you in advance for looking into the problem I'm having.

    bug: fixed 
    opened by shuang1330 6
  • PermissionDeniedError when importing frozen graph

    Hi there,

    I'm trying to use the library with a pretrained model to visualise features. I think where I'm getting stuck is providing the graph to the activation_visualisation function (i.e. I don't think this is a problem with TensorFlow - I'm running other scripts that seem to work okay).

    import os
    import tensorflow as tf
    import numpy as np
    import cv2
    import tf_cnnvis as tfv
    from tensorflow.python.platform import gfile
    
    X = tf.placeholder(tf.float32, shape = [None, 48, 64, 3]) # placeholder for input images
    im = np.array(cv2.imread("test.jpg"))
    
    with tf.Session() as sess:
        model_filename = "saved_model.pb"
        with gfile.FastGFile(model_filename, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            tf.import_graph_def(graph_def)
    
        is_success = tfv.activation_visualization(sess_graph_path = tf.get_default_graph(), value_feed_dict = {X : im})
    
    sess.close()
    

    I'm still fairly new to Tensorflow, so it may be an issue with where I'm working in the session, but I've tried jostling the activation_visualization function around to no avail.

    This is the error message I get.

    2018-08-13 14:43:16.446408: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1098] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7534 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0, compute capability: 6.1)
    2018-08-13 14:43:16.446561: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1098] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 7534 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080, pci bus id: 0000:03:00.0, compute capability: 6.1)
    2018-08-13 14:43:16.446685: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1098] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 7534 MB memory) -> physical GPU (device: 2, name: GeForce GTX 1080, pci bus id: 0000:81:00.0, compute capability: 6.1)
    2018-08-13 14:43:16.447510: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1098] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 7534 MB memory) -> physical GPU (device: 3, name: GeForce GTX 1080, pci bus id: 0000:82:00.0, compute capability: 6.1)
    2018-08-13 14:43:16.475137: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at save_restore_v2_ops.cc:109 : Permission denied: model; Permission denied
    Traceback (most recent call last):
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1292, in _do_call
        return fn(*args)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1277, in _run_fn
        options, feed_dict, fetch_list, target_list, run_metadata)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1367, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.PermissionDeniedError: model; Permission denied
    	 [[{{node save/SaveV2}} = SaveV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/SaveV2/tensor_names, save/SaveV2/shape_and_slices, fake_var/_1)]]
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "visualise_testv2.py", line 24, in <module>
        is_success = tfv.activation_visualization(sess_graph_path = tf.get_default_graph(), value_feed_dict = {X : im})
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 406, in activation_visualization
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 131, in _get_visualization
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 79, in _save_model
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1417, in save
        {self.saver_def.filename_tensor_name: checkpoint_file})
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 887, in run
        run_metadata_ptr)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1110, in _run
        feed_dict_tensor, options, run_metadata)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1286, in _do_run
        run_metadata)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1308, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.PermissionDeniedError: model; Permission denied
    	 [[{{node save/SaveV2}} = SaveV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/SaveV2/tensor_names, save/SaveV2/shape_and_slices, fake_var/_1)]]
    
    Caused by op 'save/SaveV2', defined at:
      File "visualise_testv2.py", line 24, in <module>
        is_success = tfv.activation_visualization(sess_graph_path = tf.get_default_graph(), value_feed_dict = {X : im})
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 406, in activation_visualization
        path_logdir = path_logdir, path_outdir = path_outdir)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 131, in _get_visualization
        PATH = _save_model(sess_graph_path)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 77, in _save_model
        saver = tf.train.Saver()
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1078, in __init__
        self.build()
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1090, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1127, in _build
        build_save=build_save, build_restore=build_restore)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 784, in _build_internal
        save_tensor = self._AddSaveOps(filename_tensor, saveables)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 284, in _AddSaveOps
        save = self.save_op(filename_tensor, saveables)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 202, in save_op
        tensors)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1690, in save_v2
        shape_and_slices=shape_and_slices, tensors=tensors, name=name)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
        return func(*args, **kwargs)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3260, in create_op
        op_def=op_def)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1748, in __init__
        self._traceback = tf_stack.extract_stack()
    
    PermissionDeniedError (see above for traceback): model; Permission denied
    	 [[{{node save/SaveV2}} = SaveV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/SaveV2/tensor_names, save/SaveV2/shape_and_slices, fake_var/_1)]]
    

    As an aside, would it be possible in future to provide protobuf files directly as input to the function? As I said, I'm new to Tensorflow, so I'm not sure how easy it would be.

    stat: awaiting response 
    opened by tm2313 5
  • AttributeError: module 'tensorflow.python.ops.gen_nn_ops' has no attribute '_relu_grad'

    Currently I'm using TensorFlow 1.8. When I run the example code with the deconv_visualization() function, it returns this error:

      File "/Yang/project/detection/code/tf18/third_party/tf_cnnvis/tf_cnnvis/tf_cnnvis.py", line 43, in _GuidedReluGrad
        return tf.where(0. < grad, gen_nn_ops._relu_grad(grad, op.outputs[0]), tf.zeros_like(grad))
    AttributeError: module 'tensorflow.python.ops.gen_nn_ops' has no attribute '_relu_grad'
    

    Do you have any idea about that? I'd really appreciate it!

    stat: resolved 
    opened by foreverYoungGitHub 5
  • Usage for custom CNN

    Hi, thanks for your work. I tried your example example/tf_cnnvis_Example1.ipynb; it works well with AlexNet.

    So I adapted the activation_visualization function to my own CNN architecture, defined like this:

    Conv1_1+relu+conv1_2+relu+pool1+conv2_1+relu+conv2_2+relu+pool2+fc1

    The activation map was successfully saved for conv1_1 and conv1_2, but it gave an error for conv2_1 in this function:

    def _activation(graph, sess, op_tensor, feed_dict):
        with graph.as_default() as g:
            with sess.as_default() as sess:
                act = sess.run(op_tensor, feed_dict = feed_dict)
        return act

    The error comes from this line: act = sess.run(op_tensor, feed_dict = feed_dict). I think it is a problem with the shapes of op_tensor (?, 112, 112, 64) and feed_dict (?, 224, 224, 3).

    The error is: InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_2' with dtype float

    As my CNN is slightly different from AlexNet, I did not use any local response normalization. So my question is: does your visualizer work for general CNN models, or only for a specific architecture?

    Thanks in advance

    Look forward to your response.

    Hao

    stat: awaiting response 
    opened by anbai106 5
  • Deepdream for Mobilenet V2 COCO model

    I'm trying deepdream_visualization on the ssdlite_mobilenet_v2_coco model by modifying tf_cnnvis_Example3.ipynb. I have set the input layer to FeatureExtractor/MobilenetV2/MobilenetV2/input and the visualisation layer to import/FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128/BatchNorm/batchnorm/add_1

    Given these layers, deepdream_visualization generated an uninterpretable image (image attached).

    I'm not sure if I have chosen the correct layers for input and visualization. Any pointers on this would be very useful.

    Thanks

    stat: awaiting response 
    opened by endeepak 4
  • No output to log or output directory

    Hello, I'm rather new to all this, so forgive me if this is a daft question. However, I just can't get the examples to work with my data. I've basically followed the "Training" section of https://www.tensorflow.org/tutorials/image_retraining and have had some good results from the resulting model. However, I want to visualise them, and your library seems perfect for the job!

    What am I doing wrong in the code below? I get no errors and 'is_success' is True, so it appears to have worked?

    import os
    import sys
    import time
    import copy
    import h5py
    import numpy as np
    
    from tf_cnnvis import *
    
    import tensorflow as tf
    from scipy.misc import imread, imresize
    
    t_input = tf.placeholder(np.float32, name='Placeholder') # define the input tensor
    
    graph = tf.Graph()
    graph_def = tf.GraphDef()
    
    with open('./path/to/my/graph.pb', "rb") as f:
      graph_def.ParseFromString(f.read())
    
    with graph.as_default():
      tf.import_graph_def(graph_def)
    
    # reading sample image
    im = np.expand_dims(imread(os.path.join("./", "FILENAME.JPG")), axis = 0)
    
    tensor_names = [t.name for op in graph.get_operations() for t in op.values()]
    print(tensor_names[0])
    print(tensor_names[len(tensor_names) - 1])
    
    start = time.time()
    # api call
    
    is_success  = activation_visualization(
                                                sess_graph_path=tf.get_default_graph(),
                                                value_feed_dict = {t_input : im}, 
                                                input_tensor=tensor_names[len(tensor_names) - 1],
                                                layers=[ 'r', 'p', 'c'], 
                                                path_outdir="./output",
                                                path_logdir="./log"
                                             )
    start = time.time() - start
    print("Total Time = %f" % (start))
    print(is_success)
    

    The main things I don't seem to understand are the value_feed_dict, the input_tensor, and the t_input / im variables.

    I'm sure I'm doing something daft that would be trivial for someone in the know to spot.

    Thank you!

    stat: resolved 
    opened by afeltham 4
  • Do you have an example of using tf_cnnvis with a keras model?

    Hello, I would like to ask if any of you have an example of using tf_cnnvis with a Keras model. I'm still using TensorFlow 1.13 due to some problems with dependencies and custom layers.

    Thanks in advance

    opened by aendrs 0
  • tf-nightly-gpu not detected?

    I have an installation of TensorFlow nightly GPU (tf v2.5.x). When following the installation steps, after executing setup.py, the process ends with:

    Finished processing dependencies for tf-cnnvis==1.0.0
    Please install TenSorflow with 'pip install tensorflow'

    When I list my installed packages with conda list, tf_cnnvis is not listed, and importing it in notebooks does not work. Is this a bug or am I missing something?

    opened by ghylander 1
  • How to generate single class based feature

    I have a similar network and I want to generate single-class-based features as shown in readme.md. But in the documentation, using deepdream you only visualize a layer, not one class? So how do we extract features based on one class?

    Please guide us for our research 🙏

    opened by robonetphy 2
  • Method for deconvolution

    Thank you for this fantastic source code. I am new to this field and trying to understand how you do deconvolution. In your ReadMe.md, you refer to the paper "Visualizing and Understanding Convolutional Networks" by Matthew D. Zeiler and Rob Fergus, which refers to another paper, "Deconvolutional networks" by Zeiler et al. 2010. Is the algorithm in the "Deconvolutional networks" paper the one that you implemented in the source code?

    opened by vatthaphon 0
  • Code won't run: ImportError: cannot import name 'imsave' from 'scipy.misc'

    I ran all of these from InFoCusp/tf_cnnvis:

    pip install setuptools
    pip install six
    python setup.py install
    python setup.py clean

    from scipy.misc import imsave # in __init__.py of tf_cnnvis

    Error: ImportError: cannot import name 'imsave' from 'scipy.misc' (C:\projects\TensorFlow-MIL\venv\lib\site-packages\scipy\misc\__init__.py)

    What am I missing ???

    opened by scottmason2000 3
Releases
  • leviosa (Mar 15, 2019)

  • v1.1.0 (Feb 2, 2018)

    What's new

    • Major bug fixes
    • Check for frozen graph added
    • Cleaner output folder structure
    • Deep dream now supports single channel inputs
    • More generic setup.py
    • Examples updated

    Thanks to our contributors:

    @csggnn, @b8horpet, @SebastienDebia, @javiribera, @BhagyeshVikani, @falaktheoptimist

  • v1.0.0 (Apr 18, 2017)

    Supported convolutional neural network visualization techniques:

    1. Based on the paper Visualizing and Understanding Convolutional Networks by Matthew D. Zeiler and Rob Fergus. The goal here is to reconstruct the input image from the information contained in any given layer of the convolutional neural network.
    2. CNN visualization based on Deep Dream. Here's the relevant blog post explaining the technique. In essence, it attempts to construct an input image that maximizes the activation for a given output.
Owner: InFoCusp