TensorFlow implementation of ENet

Overview

TensorFlow-ENet

TensorFlow implementation of ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation.

This model was tested on the CamVid dataset with street scenes taken from Cambridge, UK. For more information on this dataset, please visit: http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/.

Requirements: TensorFlow >= r1.2

Visualizations

Note that the GIFs may be out of sync if they do not load at the same time. You can refresh the page to see them in sync.

Test Dataset Output

[GIFs: CamVid test dataset segmentation outputs]

TensorBoard Visualizations

Execute tensorboard --logdir=log from the root directory to monitor your training and watch your segmentation output take shape against the ground truth and the original image as you train.

Contents

Code

  • enet.py: The ENet model definition, including the argument scope.

  • train_enet.py: The file for training. Includes saving of images for visualization and tunable hyperparameters.

  • test_enet.py: The file for evaluating on the test dataset. Includes option to visualize images as well.

  • preprocessing.py: Performs only image resizing, in case you want to use a smaller image size due to memory constraints or for other datasets.

  • predict_segmentation.py: Obtains the segmentation output for visualization purposes. You can create your own gif from these outputs (see the sketch after this list).

  • get_class_weights.py: The file to obtain either the median frequency balancing class weights, or the custom ENet function class weights.

  • train.sh: Example training script to train the different variations of the model.

  • test.sh: Example testing script to test the different variants you trained.
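
For reference, here is a minimal sketch of stitching the frames saved by predict_segmentation.py into a gif. The directory, file pattern, and use of imageio are illustrative assumptions, not part of this repository:

    import glob
    import imageio

    # Collect the saved segmentation frames in order; adjust the pattern
    # to wherever predict_segmentation.py wrote its outputs.
    frame_paths = sorted(glob.glob('visualizations/frames/image_*.png'))
    frames = [imageio.imread(p) for p in frame_paths]

    # Write the gif; duration is the display time per frame in seconds.
    imageio.mimsave('visualizations/output.gif', frames, duration=0.1)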

Folders

  • dataset: Contains 6 folders that hold the original train-val-test images and their corresponding ground-truth annotations.

  • checkpoint: The checkpoint directory that can be used for predicting the segmentation output (see the restore sketch after this list). The model was trained with the default parameters mentioned in the paper, except that median frequency balancing was used to obtain the class weights. The final checkpoint model size is under 5MB.

  • visualizations: Contains the gif files that were created from the output of predict_segmentation.py.
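
For reference, a minimal sketch of restoring this checkpoint for inference under TF 1.x, using the ENet signature from enet.py; the placeholder-based input and the 360x480 image size are illustrative assumptions:

    import tensorflow as tf
    from enet import ENet, ENet_arg_scope

    slim = tf.contrib.slim
    checkpoint = tf.train.latest_checkpoint('./checkpoint')

    with tf.Graph().as_default():
        # The training scripts use a queue-based input pipeline; a
        # placeholder is assumed here for simple one-off inference.
        images = tf.placeholder(tf.float32, [1, 360, 480, 3])
        with slim.arg_scope(ENet_arg_scope()):
            logits, probabilities = ENet(images,
                                         num_classes=12,
                                         batch_size=1,
                                         is_training=False,
                                         reuse=None,
                                         num_initial_blocks=1,
                                         stage_two_repeat=2,
                                         skip_connections=False)
        predictions = tf.argmax(probabilities, -1)

        saver = tf.train.Saver(slim.get_variables_to_restore())
        with tf.Session() as sess:
            saver.restore(sess, checkpoint)
            # sess.run(predictions, feed_dict={images: ...}) on real frames.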

Important Notes

  1. As the Max Unpooling layer is not officially available in TensorFlow, a manual implementation was used to build the decoder portion of the network, based on the implementation suggested in this TensorFlow GitHub issue (a sketch of the idea follows this list).

  2. Batch normalization and 2D Spatial Dropout are still retained during testing for good performance.

  3. Class weights are used to tackle the problem of imbalanced classes, as certain classes appear far more frequently than others. Most notably, the background class has a weight of 0.0, so that the model is not rewarded for predicting background.

  4. On the labels and colouring scheme: the dataset consists of only 12 labels, with the road-marking class merged into the road class. The last class is the unlabelled class.

  5. No preprocessing is done to the images for ENet (see the references below for clarifications with the author).

  6. Once you've fine-tuned and found your best hyperparameters, there's an option to combine the training and validation datasets. However, if your training dataset is large enough, this won't make much difference.
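
For context, here is a minimal sketch of the scatter-based unpooling idea from note 1. It assumes TF 1.x NHWC tensors, a static output_shape, and the default per-image flat indices returned by tf.nn.max_pool_with_argmax; unpool_sketch is a hypothetical name illustrating the technique, not the repo's exact unpool function:

    import tensorflow as tf

    def unpool_sketch(pooled, argmax, output_shape):
        """Scatter each pooled value back to the position recorded in argmax.

        pooled:       [N, H, W, C] output of tf.nn.max_pool_with_argmax.
        argmax:       the matching indices tensor (flat within each image).
        output_shape: static shape [N, 2H, 2W, C] of the unpooled tensor.
        """
        flat_size = output_shape[1] * output_shape[2] * output_shape[3]
        batch_size = output_shape[0]

        values = tf.reshape(pooled, [-1])
        # argmax indexes within a single image, so add a per-image offset.
        batch_range = tf.reshape(tf.range(batch_size, dtype=argmax.dtype),
                                 [-1, 1, 1, 1])
        indices = tf.reshape(argmax + batch_range * flat_size, [-1, 1])

        # Scatter the pooled values into a flat tensor, then reshape.
        flat = tf.scatter_nd(indices, values, [batch_size * flat_size])
        return tf.reshape(flat, output_shape)

    # Example pairing with the pooling op used in the encoder:
    # pooled, argmax = tf.nn.max_pool_with_argmax(
    #     net, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')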

Implementation and Architectural Changes

  1. Skip connections can be added to connect the corresponding encoder and decoder portions for better performance.

  2. The number of initial blocks and the depth of stage 2 residual bottlenecks are tunable hyperparameters. This allows you to build a deeper network if required, since ENet is rather lightweight.

  3. Fused batch normalization is used instead of standard batch normalization for faster computation. See TensorFlow's best practices.

  4. To obtain the class weights for computing the weighted loss, Median Frequency Balancing (MFB) is used by default instead of the custom ENet class weighting function, as MFB was observed to give slightly better performance than the custom function, at least on my machine. The option of using the ENet custom class weights is still available (a sketch of the MFB computation follows this list).
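
For reference, a rough sketch of the median frequency balancing computation, mirroring the idea behind get_class_weights.py rather than its exact code; median_frequency_balancing and label_paths are hypothetical names, and the sketch assumes every class appears somewhere in the dataset:

    import numpy as np
    from scipy.misc import imread  # era-appropriate; any image reader works

    def median_frequency_balancing(label_paths, num_classes=12):
        """weight_c = median(freq) / freq_c, where freq_c is the pixel
        frequency of class c over the images in which c appears."""
        class_pixels = np.zeros(num_classes)
        total_pixels = np.zeros(num_classes)  # pixels of images containing c

        for path in label_paths:
            labels = imread(path)
            for c in range(num_classes):
                count = np.sum(labels == c)
                if count > 0:
                    class_pixels[c] += count
                    total_pixels[c] += labels.size

        freq = class_pixels / total_pixels
        weights = np.median(freq) / freq

        # As in note 3 above: zero out the unlabelled/background class so
        # the model is not rewarded for predicting it.
        weights[-1] = 0.0
        return weights

The resulting vector is multiplied into the one-hot labels to weight the per-pixel cross-entropy loss during training.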

References

  1. ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation
  2. Implementation of Max Unpooling
  3. Implementation of PReLU
  4. Clarifications from ENet author
  5. Original Torch implementation of ENet
  6. ResNet paper for clarification on residual bottlenecks
  7. Colouring scheme

Feedback and Bugs

This implementation may not be entirely correct and may contain bugs. It would be great if the open-source community could spot any bugs and raise a GitHub issue or submit a pull request to fix them!

Citation

If you are using this work for your research, please consider citing:

@misc{kwot_sin_lee_2017_3403269,
  author       = {Kwot Sin Lee},
  title        = {kwotsin/TensorFlow-ENet: DOI},
  month        = jun,
  year         = 2017,
  doi          = {10.5281/zenodo.3403269},
  url          = {https://doi.org/10.5281/zenodo.3403269}
}

Comments
  • Adding the input output node to freeze the graph for only inference purpose

    I was trying to freeze the graph; however, you are using a TensorFlow input pipeline instead of a placeholder. Could you please explain how to remove the input pipeline and add a node for reading an input image?

    Thanks!

    opened by chandrakantkhandelwal 17
  • Porting this model to serve it in Android

    Hi @kwotsin, this is brilliant work, thanks for sharing. I want to freeze this model and serve it through an Android app. Can it be done? Do you have any pointers on where to start?

    opened by anandcu3 11
  • Problem occurred~

    InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'MaxPoolWithArgmax' with these attrs.
    Registered devices: [CPU]
    Registered kernels:
      device='GPU'; Targmax in [DT_INT64]; T in [DT_DOUBLE]
      device='GPU'; Targmax in [DT_INT64]; T in [DT_FLOAT]
      device='GPU'; Targmax in [DT_INT64]; T in [DT_HALF]

     [[Node: ENet_1/bottleneck1_0_main_max_pool = MaxPoolWithArgmax[T=DT_FLOAT, Targmax=DT_INT64, ksize=[1, 2, 2, 1], padding="SAME", strides=[1, 2, 2, 1]](ENet_1/initial_block_1_concat)]]
    
    opened by pandamax 5
  • Error when training with new data

    I tried to train ENet with my own data, but it caused an error. It is a problem with FIFOQueue. Our data has 715 images for training, 105 for validation, and 100 for testing.

    Can you help me figure out this problem? Thanks a lot. Anh

    
    2017-08-15 16:48:45.882942: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: FIFOQueue '_1_batch/fifo_queue' is closed and has insufficient elements (requested 5, current size 0)
    	 [[Node: batch = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, batch/n)]]
    2017-08-15 16:48:45.883176: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: FIFOQueue '_1_batch/fifo_queue' is closed and has insufficient elements (requested 5, current size 0)
    	 [[Node: batch = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, batch/n)]]
    2017-08-15 16:48:45.883198: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: FIFOQueue '_1_batch/fifo_queue' is closed and has insufficient elements (requested 5, current size 0)
    	 [[Node: batch = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, batch/n)]]
    2017-08-15 16:48:45.883214: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: FIFOQueue '_1_batch/fifo_queue' is closed and has insufficient elements (requested 5, current size 0)
    	 [[Node: batch = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, batch/n)]]
    2017-08-15 16:48:45.883861: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: FIFOQueue '_1_batch/fifo_queue' is closed and has insufficient elements (requested 5, current size 0)
    	 [[Node: batch = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, batch/n)]]
    2017-08-15 16:48:45.886415: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: FIFOQueue '_1_batch/fifo_queue' is closed and has insufficient elements (requested 5, current size 0)
    	 [[Node: batch = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, batch/n)]]
    Traceback (most recent call last):
      File "train_enet.py", line 357, in <module>
        run()
      File "train_enet.py", line 172, in run
        with slim.arg_scope(ENet_arg_scope(weight_decay=weight_decay)):
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/contextlib.py", line 35, in __exit__
        self.gen.throw(type, value, traceback)
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 964, in managed_session
        self.stop(close_summary_writer=close_summary_writer)
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 792, in stop
        stop_grace_period_secs=self._stop_grace_secs)
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/site-packages/tensorflow/python/training/coordinator.py", line 389, in join
        six.reraise(*self._exc_info_to_raise)
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/site-packages/tensorflow/python/training/queue_runner_impl.py", line 238, in _run
        enqueue_callable()
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1063, in _single_operation_run
        target_list_as_strings, status, None)
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/contextlib.py", line 24, in __exit__
        self.gen.next()
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
        pywrap_tensorflow.TF_GetCode(status))
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape mismatch in tuple component 1. Expected [202,360,1], got [202,360,3]
    	 [[Node: batch/fifo_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, Squeeze/_2975, Squeeze_1/_2977)]]
    
    
    opened by ducanh841988 5
  • performance issue about inference (0.5~1 fps)

    Hi Kwotsin, thank you so much for the implementation! I trained the model with my data and got a much better mIOU compared to the legacy method.

    However, the inference performance is not as good as the stats in the paper. My result is about 0.5~1 fps for 1280x720 images, on a GeForce GTX 1080. My class count is 2, so it should save a lot of compute. I run the inference on real-time images from OpenCV.

    Any idea about the root cause?

    Thanks a lot, BRS, Dell

    opened by DellBrother 3
  • OOM when allocating tensor with shape[10,45,60,128]

    Caused by op 'ENet/Relu_67', defined at:
      File "D:/Python/code/Semantic Segmentation/TensorFlow-ENet-master/train_enet.py", line 337, in <module>
        run()
      File "D:/Python/code/Semantic Segmentation/TensorFlow-ENet-master/train_enet.py", line 162, in run
        skip_connections=skip_connections)
      File "D:\Python\code\Semantic Segmentation\TensorFlow-ENet-master\enet.py", line 455, in ENet
        net = bottleneck(net, output_depth=128, filter_size=3, dilated=True, dilation_rate=16, scope='bottleneck'+str(i)+'_8')
      File "C:\ProgramData\Anaconda2\envs\anaconda3\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 181, in func_with_args
        return func(*args, **current_args)
      File "D:\Python\code\Semantic Segmentation\TensorFlow-ENet-master\enet.py", line 264, in bottleneck
        net = prelu(net, scope=scope+'_prelu4')
      File "C:\ProgramData\Anaconda2\envs\anaconda3\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 181, in func_with_args
        return func(*args, **current_args)
      File "D:\Python\code\Semantic Segmentation\TensorFlow-ENet-master\enet.py", line 35, in prelu
        pos = tf.nn.relu(x)
      File "C:\ProgramData\Anaconda2\envs\anaconda3\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 2272, in relu
        result = _op_def_lib.apply_op("Relu", features=features, name=name)
      File "C:\ProgramData\Anaconda2\envs\anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
        op_def=op_def)
      File "C:\ProgramData\Anaconda2\envs\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2506, in create_op
        original_op=self._default_original_op, op_def=op_def)
      File "C:\ProgramData\Anaconda2\envs\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1269, in __init__
        self._traceback = _extract_stack()

    ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[10,45,60,128]

    opened by louiskoo520 3
  • The accuracy is low when I train on my own dataset

    I want to use this model on my own dataset (2 classes: car and background). However, the accuracy is very low; as the loss falls, the mean_iou also falls. The model does not work for this dataset.
    What is the problem?

    opened by jzx-gooner 2
  • How to use predict for real-time webcam?

    Hi, thank you for providing good code.

    I have one question.

    Can I use the prediction code in real time with a webcam? If there is a way, can you let me know?

    Regards.

    opened by zmqp111 2
  • Please help me, tell me how to predict on video frames

    I modified the file predict_segmentation.py to predict on video frames, but there are problems. Please help me, thanks! Here is the error:

        python pre_video.py
        /home/yh/.conda/envs/yh/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
          from ._conv import register_converters as _register_converters
        Traceback (most recent call last):
          File "pre_video.py", line 122, in <module>
            skip_connections=skip_connections)
          File "/home/yh/Work/TensorFlow-ENet/enet.py", line 464, in ENet
            pooling_indices=pooling_indices_2, output_shape=inputs_shape_2, scope=bottleneck_scope_name+'_0')
          File "/home/yh/.conda/envs/yh/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 183, in func_with_args
            return func(*args, **current_args)
          File "/home/yh/Work/TensorFlow-ENet/enet.py", line 321, in bottleneck
            net_unpool = unpool(net_unpool, pooling_indices, output_shape=output_shape, scope='unpool')
          File "/home/yh/Work/TensorFlow-ENet/enet.py", line 101, in unpool
            y = mask // (output_shape[2] * output_shape[3])
        TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'

    Here is my modified code:

    import tensorflow as tf
    import os
    import matplotlib.pyplot as plt
    from enet import ENet, ENet_arg_scope
    from preprocessing import preprocess
    from scipy.misc import imsave
    import numpy as np
    import cv2
    import time

    slim = tf.contrib.slim

    # Specify which GPU to use
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"

    W = 500
    H = 400

    batch_size = 1
    num_classes = 2

    path = '/home/yh/data/dataset/zhijia/'
    filename = 'day.avi'  # '20170920_124050_872.avi'
    cap = cv2.VideoCapture(path + filename)
    # size=(W,H)
    fourcc = cv2.VideoWriter_fourcc('M','J','P','G')  # opencv3.0

    checkpoint_dir = "./checkpoint_mfb"
    checkpoint = tf.train.latest_checkpoint(checkpoint_dir)

    num_initial_blocks = 1
    skip_connections = False
    stage_two_repeat = 2
    '''
    #Labels to colours are obtained from here:
    https://github.com/alexgkendall/SegNet-Tutorial/blob/c922cc4a4fcc7ce279dd998fb2d4a8703f34ebd7/Scripts/test_segmentation_camvid.py

    However, the road_marking class is collapsed into the road class in the dataset provided.

    Classes:
    Sky = [128,128,128]
    Building = [128,0,0]
    Pole = [192,192,128]
    Road_marking = [255,69,0]
    Road = [128,64,128]
    Pavement = [60,40,222]
    Tree = [128,128,0]
    SignSymbol = [192,128,128]
    Fence = [64,64,128]
    Car = [64,0,128]
    Pedestrian = [64,64,0]
    Bicyclist = [0,128,192]
    Unlabelled = [0,0,0]
    '''
    label_to_colours = {0: [0,0,0], 1: [0,128,0], 2: [192,192,128],
                        3: [128,64,128], 4: [60,40,222], 5: [128,128,0],
                        6: [192,128,128], 7: [64,64,128], 8: [64,0,128],
                        9: [64,64,0], 10: [0,128,192], 11: [128,128,128]}

    # Create the photo directory
    photo_dir = checkpoint_dir + "/test_images"
    if not os.path.exists(photo_dir):
        os.mkdir(photo_dir)

    def vis_segmentation(image, seg_map):
        """Visualizes input image, segmentation map and overlay view."""
        image_width, image_height = image.size
        colored_label = label_to_color_image(seg_map).astype(np.uint8)
        image_empty = np.zeros((image_height, 2*image_width, 3), np.uint8)
        image_empty[:image_height, :image_width] = image.copy()
        image_empty[:image_height, image_width:] = colored_label.copy()

        alpha = 0.35
        beta = 1 - alpha
        gamma = 0
        img_add = cv2.addWeighted(np.array(image), alpha, seg_map, beta, gamma)
        return img_add
    

    # Create a function to convert each pixel label to colour.
    def grayscale_to_colour(image):
        print('Converting image...')
        image = image.reshape((H, W, 1))
        image = np.repeat(image, 3, axis=-1)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                label = int(image[i][j][0])
                image[i][j] = np.array(label_to_colours[label])
        return image
    

    def model_run(image):
        return predictions
    

    with tf.Graph().as_default() as graph:

    image_tensor = tf.placeholder(tf.float32, [None, None, 3])
    images = tf.expand_dims(image_tensor,0)
    
    #Create the model inference
    with slim.arg_scope(ENet_arg_scope()):
        logits, probabilities = ENet(images,
                                     num_classes=num_classes,
                                     batch_size=batch_size,
                                     is_training=False,
                                     reuse=None,
                                     num_initial_blocks=num_initial_blocks,
                                     stage_two_repeat=stage_two_repeat,
                                     skip_connections=skip_connections)
    
    variables_to_restore = slim.get_variables_to_restore()
    saver = tf.train.Saver(variables_to_restore)
    def restore_fn(sess):
        return saver.restore(sess, checkpoint)
    
    predictions = tf.argmax(probabilities, -1)
    predictions = tf.cast(predictions, tf.float32)
    print('HERE', predictions.get_shape())
    
    sv = tf.train.Supervisor(logdir=None, init_fn=restore_fn)
    
    with sv.managed_session() as sess:
         now = 0.0
         while(cap.isOpened()):
              ret, frame = cap.read()
              if (ret == False):
                  print('~~~~~~~~~~~~~~~~~~did not get any frame~~~~~~~~~~~~~~~~~~')
                  break  
              image = frame.copy()             
              image = np.asarray(image, np.float32)/255           
              print('~~~~~~~~~~~~~~',sess.run(image_tensor))
              segmentations = sess.run(predictions, feed_dict={image_tensor:image})
    
              #cv2.imshow('pre',segmentations[0])
              #cv2.waitKey(0)
    
              #T = time.time() - now
              #print(int(1/T))
              #now = time.time()
    

    cap.release()
    cv2.destroyAllWindows()


    opened by electronicYH 2
  • Using the network on 4-Channel Images

    I'm trying to use the network to train on 4-Channel Images. The changes in the code are

    tf.image.decode_image(image, channels=4) in train_enet.py

    and changed the number of channels in preprocessing.py. Now, when I try to train the network, I get the following error:

    tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot reshape a tensor with 734400 elements to shape [4,172800] (691200 elements) for 'ENet/unpool_1/Reshape_1' (op: 'Reshape') with input shapes: [4,1,90,120,17], [2] and with input tensors computed as partial shapes: input[1] = [4,172800].

    Traceback (most recent call last):
      File "train_enet.py", line 337, in <module>
     run()
      File "train_enet.py", line 162, in run
    	skip_connections=skip_connections)
      File "enet.py", line 476, in ENet
    	pooling_indices=pooling_indices_1, output_shape=inputs_shape_1, scope=bottleneck_scope_name+'_0')
      File "Anaconda3\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 181, in func_with_args
    	return func(*args, **current_args)
      File "enet.py", line 321, in bottleneck
    	net_unpool = unpool(net_unpool, pooling_indices, output_shape=output_shape, scope='unpool')
      File "enet.py", line 108, in unpool
    	indices = tf.transpose(tf.reshape(tf.stack([b, y, x, f]), [4, updates_size]))
      File "Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 2451, in reshape
    	name=name)
      File "Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
    	op_def=op_def)
      File "Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2508, in create_op
    	set_shapes_for_outputs(ret)
      File "Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1873, in set_shapes_for_outputs
    	shapes = shape_func(op)
      File "Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1823, in call_with_requiring
    	return call_cpp_shape_fn(op, require_shape_fn=True)
      File "Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 610, in call_cpp_shape_fn
    	debug_python_shape_fn, require_shape_fn)
      File "Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 676, in _call_cpp_shape_fn_impl
    	raise ValueError(err.message)
    ValueError: Cannot reshape a tensor with 734400 elements to shape [4,172800] (691200 elements) for 'ENet/unpool_1/Reshape_1' (op: 'Reshape') with input shapes: [4,1,90,120,17], [2] and with input tensors computed as partial shapes: input[1] = [4,172800].
    

    Do I need to make any other changes? Will this network even work for 4-channel images?

    opened by anandcu3 2
  • LookupError: No gradient defined for operation 'ENet/bottleneck2_0_main_max_pool' (op type: MaxPoolWithArgmax)

    When I run ./train.sh, I get this error. Is this because of the TensorFlow version, or do I need to define the gradient of ENet/bottleneck2_0_main_max_pool myself? Thanks!

    opened by zhangxgu 2
  • demo code running problem

    Hello, when I use your demo code for training, I meet a problem:

        Caused by op u'ENet/bottleneck1_0_conv3/Conv2D', defined at:
          File "train_enet.py", line 341, in <module>
            run()
          File "train_enet.py", line 162, in run
            skip_connections=skip_connections)
          File "/home/yt/TensorFlow-ENet/enet.py", line 433, in ENet
            net, pooling_indices_1, inputs_shape_1 = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01, downsampling=True, scope='bottleneck1_0')
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
            return func(*args, **current_args)
          File "/home/yt/TensorFlow-ENet/enet.py", line 223, in bottleneck
            net = slim.conv2d(net, output_depth, [1,1], scope=scope+'_conv3')
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
            return func(*args, **current_args)
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1057, in convolution
            outputs = layer.apply(inputs)
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 762, in apply
            return self.__call__(inputs, *args, **kwargs)
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 652, in __call__
            outputs = self.call(inputs, *args, **kwargs)
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/layers/convolutional.py", line 167, in call
            outputs = self._convolution_op(inputs, self.kernel)
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 838, in __call__
            return self.conv_op(inp, filter)
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 502, in __call__
            return self.call(inp, filter)
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 190, in __call__
            name=self.name)
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 639, in conv2d
            data_format=data_format, dilations=dilations, name=name)
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
            op_def=op_def)
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
            op_def=op_def)
          File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1625, in __init__
            self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

    InternalError (see above for traceback): Blas SGEMM launch failed : m=108000, n=64, k=4
      [[Node: ENet/bottleneck1_0_conv3/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](ENet/add_2, ENet/bottleneck1_0_conv3/weights/read)]]
      [[Node: Adam/update_ENet/bottleneck2_5_batch_norm1/beta/ApplyAdam/_8812 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_17711_Adam/update_ENet/bottleneck2_5_batch_norm1/beta/ApplyAdam", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

    My environment setup is: 2080 Ti, tensorflow-gpu==1.5.0, Python 2.7. Can you please help me? Also, where do I indicate which GPU to run the training code on? I didn't find it.

    opened by lapetite123 0
  • ValueError: Dimensions must be equal when trained my dataset

    I ran the demo code; it works perfectly with no errors.

    But it shows errors when using my own dataset.

    The image size is 400x400 and the annotated images have the same size; the background is 0 and the target area is 1.

    (py36) [[email protected] TensorFlow-ENet]$ python train_enet.py --weighting="ENET" --num_epochs=300 --logdir="./log/train_original_ENet_knot" --num_classes=2 --image_width=400 --image_height=400
    ========= ENet Class Weights =========
    [1.8298322321992648, 3.6742181094502975, 50.4983497918439, 50.4983497918439, 50.4983497918439, 50.4983497918439, 50.4983497918439, 50.4983497918439, 50.4983497918439, 50.4983497918439, 50.4983497918439, 0.0]
    WARNING:tensorflow:From train_enet.py:142: slice_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
    Instructions for updating:
    Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(tuple(tensor_list)).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
    WARNING:tensorflow:From /home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/input.py:372: range_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
    Instructions for updating:
    Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.range(limit).shuffle(limit).repeat(num_epochs ). If shuffle=False, omit the .shuffle(...).
    WARNING:tensorflow:From /home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/input.py:318: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
    Instructions for updating:
    Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
    WARNING:tensorflow:From /home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/input.py:188: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
    Instructions for updating:
    Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensors(tensor).repeat(num_epochs).
    WARNING:tensorflow:From /home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/input.py:197: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
    Instructions for updating:
    To construct input pipelines, use the tf.data module.
    WARNING:tensorflow:From /home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/input.py:197: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
    Instructions for updating:
    To construct input pipelines, use the tf.data module.
    WARNING:tensorflow:From train_enet.py:152: batch (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
    Instructions for updating:
    Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.batch(batch_size) (or padded_batch(...) if dynamic_pad=True).
    inputs_shape [None, 400, 400, 3]
    Traceback (most recent call last):
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1628, in _create_c_op
        c_op = c_api.TF_FinishOperation(op_desc)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 2 and 12 for 'mul' (op: 'Mul') with input shapes: [10,400,400,2], [12].

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "train_enet.py", line 338, in <module>
        run()
      File "train_enet.py", line 170, in run
        loss = weighted_cross_entropy(logits=logits, onehot_labels=annotations_ohe, class_weights=class_weights)
      File "train_enet.py", line 128, in weighted_cross_entropy
        weights = onehot_labels * class_weights
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 878, in binary_op_wrapper
        return func(x, y, name=name)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 1131, in _mul_dispatch
        return gen_math_ops.mul(x, y, name=name)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 5042, in mul
        "Mul", x=x, y=y, name=name)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
        return func(*args, **kwargs)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
        op_def=op_def)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1792, in __init__
        control_input_ops)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1631, in _create_c_op
        raise ValueError(str(e))
    ValueError: Dimensions must be equal, but are 2 and 12 for 'mul' (op: 'Mul') with input shapes: [10,400,400,2], [12].

    opened by sunyongke 1
  • ScatterNd Error in train_enet.py

    I am getting the following error when running train_enet.py with Python 3.6 and TensorFlow 1.12. Can someone help with this error?

    INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>, indices[172800] = [1, 91, 1, 0] does not index into shape [25,90,120,64]
    	 [[node ENet_1/unpool/ScatterNd (defined at /media/rohit/Data/water-segmentation/TensorFlow-ENet/enet.py:112)  = ScatterNd[T=DT_FLOAT, Tindices=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ENet_1/unpool/transpose, ENet_1/unpool/Reshape_2, ENet_1/unpool/ScatterNd/shape)]]
    
    Caused by op 'ENet_1/unpool/ScatterNd', defined at:
      File "train_enet.py", line 337, in <module>
        run()
      File "train_enet.py", line 234, in run
        skip_connections=skip_connections)
      File "/media/rohit/Data/water-segmentation/TensorFlow-ENet/enet.py", line 467, in ENet
        pooling_indices=pooling_indices_2, output_shape=inputs_shape_2, scope=bottleneck_scope_name+'_0')
      File "/home/rohit/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
        return func(*args, **current_args)
      File "/media/rohit/Data/water-segmentation/TensorFlow-ENet/enet.py", line 324, in bottleneck
        net_unpool = unpool(net_unpool, pooling_indices, output_shape=output_shape, scope='unpool')
      File "/media/rohit/Data/water-segmentation/TensorFlow-ENet/enet.py", line 112, in unpool
        ret = tf.scatter_nd(indices, values, output_shape)
      File "/home/rohit/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 7077, in scatter_nd
        "ScatterNd", indices=indices, updates=updates, shape=shape, name=name)
      File "/home/rohit/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "/home/rohit/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
        return func(*args, **kwargs)
      File "/home/rohit/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
        op_def=op_def)
      File "/home/rohit/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
        self._traceback = tf_stack.extract_stack()
    
    opened by Aakriti05 0
  • Strange in train_mean_IOU

    When training on my own dataset, I use TensorBoard to monitor the trend of my training. I found that my training accuracy keeps going higher, but the training mIOU keeps going lower. These two curves have opposite trends but keep pace. Could anyone tell me why? Thanks a lot!

    opened by x7hkvip 0