TensorFlow implementation of ENet

Overview

TensorFlow-ENet

TensorFlow implementation of ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation.

This model was tested on the CamVid dataset with street scenes taken from Cambridge, UK. For more information on this dataset, please visit: http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/.

Requirements: TensorFlow >= r1.2

Visualizations

Note that the GIFs may appear out of sync if your browser doesn't load them together. You can refresh the page to see them in sync.

Test Dataset Output

[GIFs: side-by-side segmentation outputs on the CamVid test dataset]

TensorBoard Visualizations

Execute tensorboard --logdir=log in the root directory to monitor your training and watch your segmentation output take shape against the ground truth and the original image as you train your model.

Contents

Code

  • enet.py: The ENet model definition, including the argument scope. (A minimal inference sketch follows this list.)

  • train_enet.py: The file for training. Includes saving of images for visualization and tunable hyperparameters.

  • test_enet.py: The file for evaluating on the test dataset. Includes option to visualize images as well.

  • preprocessing.py: Performs only image resizing, in case anyone wants to use a smaller image size due to memory issues or for other datasets.

  • predict_segmentation.py: Obtains the segmentation output for visualization purposes. You can create your own gif with these outputs.

  • get_class_weights.py: The file to obtain either the median frequency balancing class weights, or the custom ENet function class weights.

  • train.sh: Example training script to train the different variations of the model.

  • test.sh: Example testing script to test the different variants you trained.
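
For orientation, the sketch below shows how the ENet and ENet_arg_scope definitions from enet.py are typically wired together for inference. This is a hedged sketch: the call signature mirrors usage that appears in the issue threads further down, while the input size and checkpoint path are placeholder assumptions.

    import numpy as np
    import tensorflow as tf
    from enet import ENet, ENet_arg_scope

    slim = tf.contrib.slim

    with tf.Graph().as_default():
        # Height and width are fixed at graph-build time: the decoder's
        # max unpooling needs static shapes (see Important Notes below).
        images = tf.placeholder(tf.float32, [1, 360, 480, 3])

        with slim.arg_scope(ENet_arg_scope()):
            logits, probabilities = ENet(images,
                                         num_classes=12,
                                         batch_size=1,
                                         is_training=False,
                                         reuse=None,
                                         num_initial_blocks=1,
                                         stage_two_repeat=2,
                                         skip_connections=False)

        predictions = tf.argmax(probabilities, -1)  # per-pixel class indices

        saver = tf.train.Saver(slim.get_variables_to_restore())
        with tf.Session() as sess:
            saver.restore(sess, tf.train.latest_checkpoint('./checkpoint'))
            frame = np.zeros((1, 360, 480, 3), np.float32)  # stand-in input
            seg = sess.run(predictions, feed_dict={images: frame})

For training, train_enet.py is driven by command-line flags such as --weighting, --num_epochs, --logdir, --num_classes, --image_width, and --image_height, as can be seen in the training logs quoted in the Comments section below.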

Folders

  • dataset: Contains 6 folders that hold the original train-val-test images and their corresponding ground truth annotations.

  • checkpoint: The checkpoint directory that can be used for predicting the segmentation output. The model was trained using the default parameters mentioned in the paper, except that it uses median frequency balancing to obtain the class weights. The final checkpoint model size is under 5MB.

  • visualizations: Contains the gif files that were created from the output of predict_segmentation.py.

Important Notes

  1. As the Max Unpooling layer is not officially available in TensorFlow, a manual implementation was used to build the decoder portion of the network. This was based on the implementation suggested in this TensorFlow GitHub issue. (A hedged sketch of this unpooling appears after this list.)

  2. Batch normalization and 2D Spatial Dropout are still retained during testing for good performance.

  3. Class weights are used to tackle the problem of imbalanced classes, as certain classes appear more dominantly than others. Most notably, the background class has a weight of 0.0, so the model is not rewarded for predicting background.

  4. On the labels and colouring scheme: The dataset consists of only 12 labels, with the road-marking class merged with the road class. The last class is the unlabelled class.

  5. No preprocessing is done to the images for ENet (see the references below for clarifications from the author).

  6. Once you've fine-tuned to get your best hyperparameters, there's an option to combine the training and validation datasets together. However, if your training dataset is large enough, this won't make a lot of difference.
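
As promised in note 1, here is a minimal sketch of max unpooling built from tf.scatter_nd, following the approach suggested in the referenced GitHub issue. The index arithmetic matches the unpool internals quoted in the tracebacks further down, but treat this as an illustration rather than the repository's exact code.

    import tensorflow as tf

    def unpool(updates, mask, output_shape, scope='unpool'):
        # `mask` is the argmax output of tf.nn.max_pool_with_argmax;
        # `output_shape` is the static pre-pooling shape
        # [batch, height, width, channels].
        with tf.variable_scope(scope):
            mask = tf.cast(mask, tf.int32)
            # Decode each flat argmax index back into (batch, y, x, feature)
            # coordinates.
            ones = tf.ones_like(mask, dtype=tf.int32)
            batch_range = tf.reshape(tf.range(output_shape[0], dtype=tf.int32),
                                     shape=[output_shape[0], 1, 1, 1])
            b = ones * batch_range
            y = mask // (output_shape[2] * output_shape[3])
            x = (mask // output_shape[3]) % output_shape[2]
            f = mask % output_shape[3]
            # Scatter the pooled values back to their pre-pooling positions;
            # all other positions stay zero.
            updates_size = tf.size(updates)
            indices = tf.transpose(tf.reshape(tf.stack([b, y, x, f]),
                                              [4, updates_size]))
            values = tf.reshape(updates, [updates_size])
            return tf.scatter_nd(indices, values, output_shape)

Note that the reliance on a static output_shape is also why fully dynamic input sizes fail in the decoder, a recurring theme in the issues below.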

Implementation and Architectural Changes

  1. Skip connections can be added to connect the corresponding encoder and decoder portions for better performance.

  2. The number of initial blocks and the depth of stage 2 residual bottlenecks are tunable hyperparameters. This allows you to build a deeper network if required, since ENet is rather lightweight.

  3. Fused batch normalization is used over standard batch normalization for faster computations. See TensorFlow's best practices.

  4. To obtain the class weights for computing the weighted loss, Median Frequency Balancing (MFB) is used by default instead of the custom ENet class weighting function, owing to the observation that MFB gives slightly better performance than the custom function, at least on my machine. The option of using the ENet custom class weights is still available, however. (A sketch of the MFB computation follows this list.)
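
As mentioned in point 4, a minimal sketch of how MFB class weights can be computed from ground-truth annotations is shown below. This is a simplified, hedged version; get_class_weights.py is the authoritative implementation, and per the notes above the background class weight is zeroed out separately.

    import numpy as np

    def median_frequency_balancing(label_images, num_classes=12):
        # freq(c)   = (pixels of class c) / (total pixels of images where c appears)
        # weight(c) = median(freq) / freq(c)
        class_pixels = np.zeros(num_classes)
        present_pixels = np.zeros(num_classes)
        for label in label_images:  # each label: 2-D array of class indices
            for c in range(num_classes):
                count = np.sum(label == c)
                if count > 0:
                    class_pixels[c] += count
                    present_pixels[c] += label.size
        freq = class_pixels / np.maximum(present_pixels, 1)
        median_freq = np.median(freq[freq > 0])
        return np.where(freq > 0, median_freq / np.maximum(freq, 1e-12), 0.0)

In train_enet.py the resulting vector is applied to the one-hot labels before the cross-entropy (the line weights = onehot_labels * class_weights appears verbatim in one of the tracebacks below).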

References

  1. ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation
  2. Implementation of Max Unpooling
  3. Implementation of PReLU
  4. Clarifications from ENet author
  5. Original Torch implementation of ENet
  6. ResNet paper for clarification on residual bottlenecks
  7. Colouring scheme

Feedback and Bugs

This implementation may not be entirely correct and may contain bugs. It would be great if the open-source community could spot any bugs and raise a GitHub issue or submit a pull request to fix them!

Citation

If you are using this work for your research, please consider citing:

@misc{kwot_sin_lee_2017_3403269,
  author       = {Kwot Sin Lee},
  title        = {kwotsin/TensorFlow-ENet: DOI},
  month        = jun,
  year         = 2017,
  doi          = {10.5281/zenodo.3403269},
  url          = {https://doi.org/10.5281/zenodo.3403269}
}


Comments
  • Adding the input output node to freeze the graph for only inference purpose

    I was trying to freeze the graph; however, you are using a TensorFlow input pipeline instead of a placeholder. Could you please explain how to remove the input pipeline and add a node for reading an input image?

    Thanks!

    opened by chandrakantkhandelwal 17
  • Porting this model to serve it in Android

    Hi @kwotsin This is brilliant work. Thanks for sharing this. I want to freeze this model and serve it through an Android app. Can it be done? Do you have any pointers on where to start?

    opened by anandcu3 11
  • Problem occurred~

    InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'MaxPoolWithArgmax' with these attrs. Registered devices: [CPU], Registered kernels:
      device='GPU'; Targmax in [DT_INT64]; T in [DT_DOUBLE]
      device='GPU'; Targmax in [DT_INT64]; T in [DT_FLOAT]
      device='GPU'; Targmax in [DT_INT64]; T in [DT_HALF]

      [[Node: ENet_1/bottleneck1_0_main_max_pool = MaxPoolWithArgmax[T=DT_FLOAT, Targmax=DT_INT64, ksize=[1, 2, 2, 1], padding="SAME", strides=[1, 2, 2, 1]](ENet_1/initial_block_1_concat)]]
    
    opened by pandamax 5
  • Error when train with new data

    I tried to train ENet with my own data but it caused an error. It is a problem with FIFOQueue. Our data has 715 images for training, 105 for validation, and 100 for testing.

    Can you help me figure out this problem? Thanks a lot. Anh

    
    2017-08-15 16:48:45.882942: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: FIFOQueue '_1_batch/fifo_queue' is closed and has insufficient elements (requested 5, current size 0)
    	 [[Node: batch = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, batch/n)]]
    2017-08-15 16:48:45.883176: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: FIFOQueue '_1_batch/fifo_queue' is closed and has insufficient elements (requested 5, current size 0)
    	 [[Node: batch = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, batch/n)]]
    2017-08-15 16:48:45.883198: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: FIFOQueue '_1_batch/fifo_queue' is closed and has insufficient elements (requested 5, current size 0)
    	 [[Node: batch = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, batch/n)]]
    2017-08-15 16:48:45.883214: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: FIFOQueue '_1_batch/fifo_queue' is closed and has insufficient elements (requested 5, current size 0)
    	 [[Node: batch = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, batch/n)]]
    2017-08-15 16:48:45.883861: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: FIFOQueue '_1_batch/fifo_queue' is closed and has insufficient elements (requested 5, current size 0)
    	 [[Node: batch = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, batch/n)]]
    2017-08-15 16:48:45.886415: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: FIFOQueue '_1_batch/fifo_queue' is closed and has insufficient elements (requested 5, current size 0)
    	 [[Node: batch = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, batch/n)]]
    Traceback (most recent call last):
      File "train_enet.py", line 357, in <module>
        run()
      File "train_enet.py", line 172, in run
        with slim.arg_scope(ENet_arg_scope(weight_decay=weight_decay)):
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/contextlib.py", line 35, in __exit__
        self.gen.throw(type, value, traceback)
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 964, in managed_session
        self.stop(close_summary_writer=close_summary_writer)
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 792, in stop
        stop_grace_period_secs=self._stop_grace_secs)
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/site-packages/tensorflow/python/training/coordinator.py", line 389, in join
        six.reraise(*self._exc_info_to_raise)
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/site-packages/tensorflow/python/training/queue_runner_impl.py", line 238, in _run
        enqueue_callable()
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1063, in _single_operation_run
        target_list_as_strings, status, None)
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/contextlib.py", line 24, in __exit__
        self.gen.next()
      File "/home/leducanh/.pyenv/versions/anaconda3-2.5.0/envs/tensorflow120/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
        pywrap_tensorflow.TF_GetCode(status))
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape mismatch in tuple component 1. Expected [202,360,1], got [202,360,3]
    	 [[Node: batch/fifo_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_FLOAT, DT_UINT8], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch/fifo_queue, Squeeze/_2975, Squeeze_1/_2977)]]
    
    
    opened by ducanh841988 5
  • performance issue about inference (0.5~1 fps)

    Hi Kwotsin, thank you so much for the implementation! I trained the model with my data and got a much better mIOU compared to the legacy method.

    However, the inference performance does not match the stats from the paper. My result is about 0.5~1 fps for a 1280x720 image, on a GeForce GTX 1080. My class number is 2, so it should save a lot of compute. I run the inference on real-time OpenCV images.

    Any idea about the root cause?

    thanks a lot, BRS, Dell

    opened by DellBrother 3
  • OOM when allocating tensor with shape[10,45,60,128]

    Caused by op 'ENet/Relu_67', defined at:
      File "D:/Python/code/Semantic Segmentation/TensorFlow-ENet-master/train_enet.py", line 337, in <module>
        run()
      File "D:/Python/code/Semantic Segmentation/TensorFlow-ENet-master/train_enet.py", line 162, in run
        skip_connections=skip_connections)
      File "D:\Python\code\Semantic Segmentation\TensorFlow-ENet-master\enet.py", line 455, in ENet
        net = bottleneck(net, output_depth=128, filter_size=3, dilated=True, dilation_rate=16, scope='bottleneck'+str(i)+'_8')
      File "C:\ProgramData\Anaconda2\envs\anaconda3\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 181, in func_with_args
        return func(*args, **current_args)
      File "D:\Python\code\Semantic Segmentation\TensorFlow-ENet-master\enet.py", line 264, in bottleneck
        net = prelu(net, scope=scope+'_prelu4')
      File "C:\ProgramData\Anaconda2\envs\anaconda3\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 181, in func_with_args
        return func(*args, **current_args)
      File "D:\Python\code\Semantic Segmentation\TensorFlow-ENet-master\enet.py", line 35, in prelu
        pos = tf.nn.relu(x)
      File "C:\ProgramData\Anaconda2\envs\anaconda3\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 2272, in relu
        result = _op_def_lib.apply_op("Relu", features=features, name=name)
      File "C:\ProgramData\Anaconda2\envs\anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
        op_def=op_def)
      File "C:\ProgramData\Anaconda2\envs\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2506, in create_op
        original_op=self._default_original_op, op_def=op_def)
      File "C:\ProgramData\Anaconda2\envs\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1269, in __init__
        self._traceback = _extract_stack()

    ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[10,45,60,128]

    opened by louiskoo520 3
  • The accuracy is low when i train on my own dataset

    I want to use this model for my own dataset (2 classes: car and background). However, the accuracy is very low: as the loss falls, the mean_iou also falls. The model does not work for this dataset.
    What is the problem?

    opened by jzx-gooner 2
  • How to use predict for real-time webcam?

    Hi, thank you for providing good code.

    I have one question.

    Can I use the prediction code in real time with a webcam? If there is a way, can you let me know?

    Regards.

    opened by zmqp111 2
  • please help me, tell me how to predict the video frame

    I modified the predict_segmentation.py file to predict video frames, but there are problems. Please help me, thanks! Here is the error:

    python pre_video.py
    /home/yh/.conda/envs/yh/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
      from ._conv import register_converters as _register_converters
    Traceback (most recent call last):
      File "pre_video.py", line 122, in <module>
        skip_connections=skip_connections)
      File "/home/yh/Work/TensorFlow-ENet/enet.py", line 464, in ENet
        pooling_indices=pooling_indices_2, output_shape=inputs_shape_2, scope=bottleneck_scope_name+'_0')
      File "/home/yh/.conda/envs/yh/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 183, in func_with_args
        return func(*args, **current_args)
      File "/home/yh/Work/TensorFlow-ENet/enet.py", line 321, in bottleneck
        net_unpool = unpool(net_unpool, pooling_indices, output_shape=output_shape, scope='unpool')
      File "/home/yh/Work/TensorFlow-ENet/enet.py", line 101, in unpool
        y = mask // (output_shape[2] * output_shape[3])
    TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'

    Here is my modified code:

    import tensorflow as tf
    import os
    import matplotlib.pyplot as plt
    from enet import ENet, ENet_arg_scope
    from preprocessing import preprocess
    from scipy.misc import imsave
    import numpy as np
    import cv2
    import time

    slim = tf.contrib.slim

    # Select which GPU to use
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"

    W = 500
    H = 400

    batch_size = 1
    num_classes = 2

    path = '/home/yh/data/dataset/zhijia/'
    filename = 'day.avi'  # '20170920_124050_872.avi'
    cap = cv2.VideoCapture(path + filename)
    # size = (W, H)
    fourcc = cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')  # opencv3.0

    checkpoint_dir = "./checkpoint_mfb"
    checkpoint = tf.train.latest_checkpoint(checkpoint_dir)

    num_initial_blocks = 1
    skip_connections = False
    stage_two_repeat = 2

    '''
    Labels to colours are obtained from here:
    https://github.com/alexgkendall/SegNet-Tutorial/blob/c922cc4a4fcc7ce279dd998fb2d4a8703f34ebd7/Scripts/test_segmentation_camvid.py

    However, the road_marking class is collapsed into the road class in the dataset provided.

    Classes:
    Sky = [128,128,128]        Building = [128,0,0]        Pole = [192,192,128]
    Road_marking = [255,69,0]  Road = [128,64,128]         Pavement = [60,40,222]
    Tree = [128,128,0]         SignSymbol = [192,128,128]  Fence = [64,64,128]
    Car = [64,0,128]           Pedestrian = [64,64,0]      Bicyclist = [0,128,192]
    Unlabelled = [0,0,0]
    '''
    label_to_colours = {0: [0,0,0], 1: [0,128,0], 2: [192,192,128],
                        3: [128,64,128], 4: [60,40,222], 5: [128,128,0],
                        6: [192,128,128], 7: [64,64,128], 8: [64,0,128],
                        9: [64,64,0], 10: [0,128,192], 11: [128,128,128]}

    # Create the photo directory
    photo_dir = checkpoint_dir + "/test_images"
    if not os.path.exists(photo_dir):
        os.mkdir(photo_dir)

    def vis_segmentation(image, seg_map):
        """Visualizes input image, segmentation map and overlay view."""
        image_width, image_height = image.size
        colored_label = label_to_color_image(seg_map).astype(np.uint8)
        image_empty = np.zeros((image_height, 2*image_width, 3), np.uint8)
        image_empty[:image_height, :image_width] = image.copy()
        image_empty[:image_height, image_width:] = colored_label.copy()

        alpha = 0.35
        beta = 1 - alpha
        gamma = 0
        img_add = cv2.addWeighted(np.array(image), alpha, seg_map, beta, gamma)
        return img_add

    # Create a function to convert each pixel label to colour.
    def grayscale_to_colour(image):
        print('Converting image...')
        image = image.reshape((H, W, 1))
        image = np.repeat(image, 3, axis=-1)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                label = int(image[i][j][0])
                image[i][j] = np.array(label_to_colours[label])
        return image

    def model_run(image):
        return predictions

    with tf.Graph().as_default() as graph:
        image_tensor = tf.placeholder(tf.float32, [None, None, 3])
        images = tf.expand_dims(image_tensor, 0)

        # Create the model inference
        with slim.arg_scope(ENet_arg_scope()):
            logits, probabilities = ENet(images,
                                         num_classes=num_classes,
                                         batch_size=batch_size,
                                         is_training=False,
                                         reuse=None,
                                         num_initial_blocks=num_initial_blocks,
                                         stage_two_repeat=stage_two_repeat,
                                         skip_connections=skip_connections)

        variables_to_restore = slim.get_variables_to_restore()
        saver = tf.train.Saver(variables_to_restore)
        def restore_fn(sess):
            return saver.restore(sess, checkpoint)

        predictions = tf.argmax(probabilities, -1)
        predictions = tf.cast(predictions, tf.float32)
        print('HERE', predictions.get_shape())

        sv = tf.train.Supervisor(logdir=None, init_fn=restore_fn)

        with sv.managed_session() as sess:
            now = 0.0
            while cap.isOpened():
                ret, frame = cap.read()
                if ret == False:
                    print('~~~~~~~~~~ did not get any frame ~~~~~~~~~~')
                    break
                image = frame.copy()
                image = np.asarray(image, np.float32) / 255
                print('~~~~~~~~~~~~~~', sess.run(image_tensor))
                segmentations = sess.run(predictions, feed_dict={image_tensor: image})

                # cv2.imshow('pre', segmentations[0])
                # cv2.waitKey(0)

                # T = time.time() - now
                # print(int(1/T))
                # now = time.time()

    cap.release()
    cv2.destroyAllWindows()

    opened by electronicYH 2
  • Using the network on 4-Channel Images

    I'm trying to use the network to train on 4-Channel Images. The changes in the code are

    tf.image.decode_image(image, channels=4) in train_enet.py

    and changed the number of channels in preprocessing.py. Now when I try to train the network I get the following error

    tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot reshape a tensor with 734400 elements to shape [4,172800] (691200 elements) for 'ENet/unpool_1/Reshape_1' (op: 'Reshape') with input shapes: [4,1,90,120,17], [2] and with input tensors computed as partial shapes: input[1] = [4,172800].

    Traceback (most recent call last):
      File "train_enet.py", line 337, in <module>
     run()
      File "train_enet.py", line 162, in run
    	skip_connections=skip_connections)
      File "enet.py", line 476, in ENet
    	pooling_indices=pooling_indices_1, output_shape=inputs_shape_1, scope=bottleneck_scope_name+'_0')
      File "Anaconda3\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 181, in func_with_args
    	return func(*args, **current_args)
      File "enet.py", line 321, in bottleneck
    	net_unpool = unpool(net_unpool, pooling_indices, output_shape=output_shape, scope='unpool')
      File "enet.py", line 108, in unpool
    	indices = tf.transpose(tf.reshape(tf.stack([b, y, x, f]), [4, updates_size]))
      File "Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 2451, in reshape
    	name=name)
      File "Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
    	op_def=op_def)
      File "Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2508, in create_op
    	set_shapes_for_outputs(ret)
      File "Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1873, in set_shapes_for_outputs
    	shapes = shape_func(op)
      File "Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1823, in call_with_requiring
    	return call_cpp_shape_fn(op, require_shape_fn=True)
      File "Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 610, in call_cpp_shape_fn
    	debug_python_shape_fn, require_shape_fn)
      File "Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 676, in _call_cpp_shape_fn_impl
    	raise ValueError(err.message)
    ValueError: Cannot reshape a tensor with 734400 elements to shape [4,172800] (691200 elements) for 'ENet/unpool_1/Reshape_1' (op: 'Reshape') with input shapes: [4,1,90,120,17], [2] and with input tensors computed as partial shapes: input[1] = [4,172800].
    

    Do I need to make any other changes? Will this network even work for 4-channel images?

    opened by anandcu3 2
  • LookupError: No gradient defined for operation 'ENet/bottleneck2_0_main_max_pool' (op type: MaxPoolWithArgmax)

    When I run ./train.sh, I get this error. Is this because of the TensorFlow version, or do I need to define the gradient of ENet/bottleneck2_0_main_max_pool myself? Thanks!

    opened by zhangxgu 2
  • demo code running problem

    Hello, when I used your demo code for training, I met a problem:

    Caused by op u'ENet/bottleneck1_0_conv3/Conv2D', defined at:
      File "train_enet.py", line 341, in <module>
        run()
      File "train_enet.py", line 162, in run
        skip_connections=skip_connections)
      File "/home/yt/TensorFlow-ENet/enet.py", line 433, in ENet
        net, pooling_indices_1, inputs_shape_1 = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01, downsampling=True, scope='bottleneck1_0')
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
        return func(*args, **current_args)
      File "/home/yt/TensorFlow-ENet/enet.py", line 223, in bottleneck
        net = slim.conv2d(net, output_depth, [1,1], scope=scope+'_conv3')
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
        return func(*args, **current_args)
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1057, in convolution
        outputs = layer.apply(inputs)
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 762, in apply
        return self.__call__(inputs, *args, **kwargs)
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 652, in __call__
        outputs = self.call(inputs, *args, **kwargs)
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/layers/convolutional.py", line 167, in call
        outputs = self._convolution_op(inputs, self.kernel)
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 838, in __call__
        return self.conv_op(inp, filter)
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 502, in __call__
        return self.call(inp, filter)
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 190, in __call__
        name=self.name)
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 639, in conv2d
        data_format=data_format, dilations=dilations, name=name)
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
        op_def=op_def)
      File "/home/yt/miniconda3/envs/enet_py2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1625, in __init__
        self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

    InternalError (see above for traceback): Blas SGEMM launch failed : m=108000, n=64, k=4
      [[Node: ENet/bottleneck1_0_conv3/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](ENet/add_2, ENet/bottleneck1_0_conv3/weights/read)]]
      [[Node: Adam/update_ENet/bottleneck2_5_batch_norm1/beta/ApplyAdam/_8812 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_17711_Adam/update_ENet/bottleneck2_5_batch_norm1/beta/ApplyAdam", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

    My environment setup is: 2080 Ti, tensorflow-gpu=1.5.0, python=2.7. Can you please help me? Also, where do I indicate which GPU to use for the training code? I didn't find it.

    opened by lapetite123 0
  • ValueError: Dimensions must be equal when trained my dataset

    I ran the demo code; it worked perfectly with no errors.

    But it shows errors when using my own dataset.

    The image size is 400x400, and the annotated image has the same size; the background is 0, the target area is 1.

    (py36) [[email protected] TensorFlow-ENet]$ python train_enet.py --weighting="ENET" --num_epochs=300 --logdir="./log/train_original_ENet_knot" --num_classes=2 --image_width=400 --image_height=400
    ========= ENet Class Weights =========
    [1.8298322321992648, 3.6742181094502975, 50.4983497918439, 50.4983497918439, 50.4983497918439, 50.4983497918439, 50.4983497918439, 50.4983497918439, 50.4983497918439, 50.4983497918439, 50.4983497918439, 0.0]
    WARNING:tensorflow:From train_enet.py:142: slice_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
    Instructions for updating:
    Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(tuple(tensor_list)).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
    WARNING:tensorflow:From /home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/input.py:372: range_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
    Instructions for updating:
    Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.range(limit).shuffle(limit).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
    WARNING:tensorflow:From /home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/input.py:318: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
    Instructions for updating:
    Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
    WARNING:tensorflow:From /home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/input.py:188: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
    Instructions for updating:
    Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensors(tensor).repeat(num_epochs).
    WARNING:tensorflow:From /home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/input.py:197: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
    Instructions for updating:
    To construct input pipelines, use the tf.data module.
    WARNING:tensorflow:From /home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/input.py:197: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
    Instructions for updating:
    To construct input pipelines, use the tf.data module.
    WARNING:tensorflow:From train_enet.py:152: batch (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
    Instructions for updating:
    Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.batch(batch_size) (or padded_batch(...) if dynamic_pad=True).
    inputs_shape [None, 400, 400, 3]
    Traceback (most recent call last):
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1628, in _create_c_op
        c_op = c_api.TF_FinishOperation(op_desc)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 2 and 12 for 'mul' (op: 'Mul') with input shapes: [10,400,400,2], [12].

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "train_enet.py", line 338, in <module>
        run()
      File "train_enet.py", line 170, in run
        loss = weighted_cross_entropy(logits=logits, onehot_labels=annotations_ohe, class_weights=class_weights)
      File "train_enet.py", line 128, in weighted_cross_entropy
        weights = onehot_labels * class_weights
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 878, in binary_op_wrapper
        return func(x, y, name=name)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 1131, in _mul_dispatch
        return gen_math_ops.mul(x, y, name=name)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 5042, in mul
        "Mul", x=x, y=y, name=name)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
        return func(*args, **kwargs)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
        op_def=op_def)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1792, in __init__
        control_input_ops)
      File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1631, in _create_c_op
        raise ValueError(str(e))
    ValueError: Dimensions must be equal, but are 2 and 12 for 'mul' (op: 'Mul') with input shapes: [10,400,400,2], [12].

    opened by sunyongke 1
  • ScatterNd Error in train_enet.py file

    I am getting the following error when running train_enet.py with python=3.6 and TensorFlow=1.12. Can someone help with this error?

    INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>, indices[172800] = [1, 91, 1, 0] does not index into shape [25,90,120,64]
    	 [[node ENet_1/unpool/ScatterNd (defined at /media/rohit/Data/water-segmentation/TensorFlow-ENet/enet.py:112)  = ScatterNd[T=DT_FLOAT, Tindices=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ENet_1/unpool/transpose, ENet_1/unpool/Reshape_2, ENet_1/unpool/ScatterNd/shape)]]
    
    Caused by op 'ENet_1/unpool/ScatterNd', defined at:
      File "train_enet.py", line 337, in <module>
        run()
      File "train_enet.py", line 234, in run
        skip_connections=skip_connections)
      File "/media/rohit/Data/water-segmentation/TensorFlow-ENet/enet.py", line 467, in ENet
        pooling_indices=pooling_indices_2, output_shape=inputs_shape_2, scope=bottleneck_scope_name+'_0')
      File "/home/rohit/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
        return func(*args, **current_args)
      File "/media/rohit/Data/water-segmentation/TensorFlow-ENet/enet.py", line 324, in bottleneck
        net_unpool = unpool(net_unpool, pooling_indices, output_shape=output_shape, scope='unpool')
      File "/media/rohit/Data/water-segmentation/TensorFlow-ENet/enet.py", line 112, in unpool
        ret = tf.scatter_nd(indices, values, output_shape)
      File "/home/rohit/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 7077, in scatter_nd
        "ScatterNd", indices=indices, updates=updates, shape=shape, name=name)
      File "/home/rohit/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "/home/rohit/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
        return func(*args, **kwargs)
      File "/home/rohit/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
        op_def=op_def)
      File "/home/rohit/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
        self._traceback = tf_stack.extract_stack()
    
    opened by Aakriti05 0
  • Strange in train_mean_IOU

    When training on my own dataset, I use TensorBoard to supervise the trend of training. I found that my training accuracy goes higher and higher, but the training mIOU goes lower and lower. These two curves have opposite trends but keep pace. Could anyone tell me why? Thanks a lot!

    opened by x7hkvip 0
Releases: v1.0

Owner: Kwotsin (Software Engineer @ Snap Inc.)