DilatedNet in Keras for image segmentation

Overview

Keras implementation of DilatedNet for semantic segmentation

A native Keras implementation of semantic segmentation according to Multi-Scale Context Aggregation by Dilated Convolutions (2016). Optionally uses the authors' pretrained weights.

The code has been tested on TensorFlow 1.3, Keras 1.2, and Python 3.6.

Using the pretrained model

Download and extract the pretrained model:

curl -L https://github.com/nicolov/segmentation_keras/releases/download/model/nicolov_segmentation_model.tar.gz | tar xvf -

Install dependencies and run:

pip install -r requirements.txt
# For GPU support
pip install tensorflow-gpu==1.3.0

python predict.py --weights_path conversion/converted/dilation8_pascal_voc.npy

The output image will be under images/cat_seg.png.

Converting the original Caffe model

Follow the instructions in the conversion folder to convert the weights to the TensorFlow format that can be used by Keras.

Training

Download and extract the augmented Pascal VOC dataset:

curl -L http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz | tar -xvf -

This will create a benchmark_RELEASE directory in the root of the repo. Use the convert_masks.py script to convert the provided masks from .mat format to RGB PNGs:

python convert_masks.py \
    --in-dir benchmark_RELEASE/dataset/cls \
    --out-dir benchmark_RELEASE/dataset/pngs
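
The class-index-to-color mapping used by VOC-style masks can be sketched as follows. This is a minimal illustration of the standard Pascal VOC palette scheme, not necessarily identical to what convert_masks.py does internally:

```python
import numpy as np

def voc_palette(n=256):
    # Standard Pascal VOC palette: each class id is mapped to an RGB
    # triple by spreading its bits across the three color channels.
    palette = np.zeros((n, 3), dtype=np.uint8)
    for cid in range(n):
        c = cid
        r = g = b = 0
        for j in range(8):
            r |= ((c >> 0) & 1) << (7 - j)
            g |= ((c >> 1) & 1) << (7 - j)
            b |= ((c >> 2) & 1) << (7 - j)
            c >>= 3
        palette[cid] = (r, g, b)
    return palette

def class_mask_to_rgb(mask):
    # Map an HxW array of class ids to an HxWx3 RGB image
    return voc_palette()[mask]
```

Under this scheme, background (class 0) stays black and class 1 (aeroplane) maps to (128, 0, 0).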

Start training:

python train.py --batch-size 2

Model checkpoints are saved under trained/, and can be used with the predict.py script for testing.

The training code is currently limited to the frontend module, and thus only outputs 16x16 segmentation maps. The augmentation pipeline does mirroring but not cropping or rotation.
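
The mirroring step can be sketched like this (a minimal illustration only, not the repo's actual pipeline; the `rng` parameter is an assumption made so the flip can be controlled):

```python
import numpy as np

def random_mirror(image, mask, rng=np.random):
    # Flip image and mask together along the width axis so the
    # class labels stay aligned with the pixels they describe.
    if rng.rand() < 0.5:
        image = image[:, ::-1]
        mask = mask[:, ::-1]
    return image, mask
```

The key point is that any geometric augmentation must be applied identically to the image and its mask.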


Fisher Yu and Vladlen Koltun, Multi-Scale Context Aggregation by Dilated Convolutions, ICLR 2016

Comments
  • training and validation loss nan

    First of all, I just want to thank you for the great work. I am having an issue during training: my loss and val_loss are nan, but I am still getting values for accuracy and val_acc. I am training on the PASCAL VOC 2012 dataset with the segmentation class PNGs.

    Keras 1.2.1 & 2.0.6, tensorflow-gpu 1.2.1, Python 3.6.1

    opened by Barfknecht 9
  • Fine tuning ...

    Hello,

    You have provided the pre-trained model for VOC. I have a small dataset with 2 classes, which I annotated in the VOC format, and I want to fine-tune the model on it. Would you please guide me through the process?

    opened by MyVanitar 8
  • Modifying number of class

    Hi Nicolov,

    Thanks for the great work! I tried to train on a new dataset by generating my own set of JPGs and PNG masks. However, I realized it only works for the pre-defined 20 classes. For example, I wanted to re-train this network to segment screws from the background, but I wasn't able to find a way to add new classes other than reusing an existing color, 0x181818, which was originally trained for cats. After training it did segment the screw. However, I'm still wondering: is there any way to change the number of classes and specify which color value is associated with which class?

    opened by francisbitontistudio 7
  • Black image after segmentation

    Hi! I have val accuracy = 1, but when I try to predict the mask for an image from the train set, it gives me a black image. Does anybody know the reason for this behaviour?

    opened by dimaxano 7
  • docker running error

    Hi, @nicolov ,

    For the caffe weight conversion, I got the following error:

    (tf_1.0) [email protected]:/data/code/segmentation_keras/conversion# docker run -v $(pwd):/workspace -ti `docker build -q .`
    Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    "docker run" requires at least 1 argument(s).
    See 'docker run --help'.
    
    Usage:  docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
    
    Run a command in a new container
    (tf_1.0) ro[email protected]:/data/code/segmentation_keras/conversion#
    
    

    It says the Docker daemon is not running. Is there any other command I should run first?

    Thanks

    opened by amiltonwong 7
  • the way of loading the weight

    Hi nicolov,

    In the post, you explained how to do the weight conversion. Due to development environment constraints, it is a little hard for me to follow your steps exactly.

    In the Keras blog, the author also shows a way to load VGG16 weights from Keras directly. Do you think those weights can be used with your implementation, or do we have to use the converted Caffe model weights for pascal_voc? The dataset I will be using is from a different domain than the dataset used in the paper. Thanks for your advice.

    opened by wenouyang 5
  • Problems with CuDNN library

    While running train.py, this is the error message:

        Epoch 1/20
        E tensorflow/stream_executor/cuda/cuda_dnn.cc:378] Loaded runtime CuDNN library: 6021 (compatibility version 6000) but source was compiled with 5105 (compatibility version 5100). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.

    Since I don't have the root account, I can't install CuDNN v5. Do you know how I can fix this? Thanks!

    opened by Yuren-Zhong 4
  • IoU results

    Have you by any chance compared this to the original implementation with regard to mean IoU? If so, which implementation of IoU did you use and what were your results?

    opened by Barfknecht 4
  • about the required pre-trained vgg model

    Hi, @nicolov ,

    According to this line, vgg_conv.npy is needed as the pre-trained VGG model for training. Could you list the download location for the corresponding caffemodel and prototxt files? And is the conversion step the same as here?

    Thanks!

    opened by amiltonwong 4
  • regarding loading_weights

    Hi nicolov,

    In train.py, you have included the function load_weights(model, weights_path). My understanding is that you are trying to load a pre-trained VGG model. If I do not want to use this pretrained model, because the problem I am working on may belong to a totally different domain, should I just skip calling this load_weights function? Or is using a pre-trained model always preferable? I am kind of confused about this.

    In the notes, you mentioned that The training code is currently limited to the frontend module, and thus only outputs 16x16 segmentation maps. If I would like to leverage this code for my own data set, what are the modifications that I have to make? Do I still have to load the weights?

    Thank you very much!

    opened by wenouyang 4
  • Cannot locate Dockerfile: Dockerfile

    Probably a rookie error, but when I try to run the conversion step in conversion by running the Docker image, I get the following error:

    $sudo docker run -v $(pwd):/workspace -ti `docker build -q .`
    time="2017-02-09T09:15:11-08:00" level=fatal msg="Cannot locate Dockerfile: Dockerfile" 
    docker: "run" requires a minimum of 1 argument. See 'docker run --help'.
    
    opened by mongoose54 4
  • Training freezes

    On executing the command python train.py --batch-size 2, training freezes at the last step of the first epoch.

    All the libraries are installed according to the requirements.txt file.

    opened by ghost 1
  • AtrousConvolution2D vs. Conv2DTranspose

    Hi @nicolov, I was wondering whether in your model you wouldn't need a Conv2DTranspose or Upsampling layer to compensate for the max pooling and obtain predictions of the same size as the input image?

    opened by tinalegre 0
  • How to handle high-resolution images

    Hello @nicolov ,

    let me first express my appreciation for your work on image segmentation, it's great!

    A small suggestion: I just want to point out that there is a missing -- in the input parsing. A very minor change:

    parser.add_argument('--input_path', nargs='?', default='images/cat.jpg',
                            help='Required path to input image') 
    

    I'm hoping you can help me understand how to handle high-resolution images, such as 1028 and 4K.

    Also, in the code I found that you set input_width, input_height = 900, 900 and label_margin = 186. Can you please explain the reason for these static numbers and how they affect the output height and width?

    output_height = input_height - 2 * label_margin
    output_width = input_width - 2 * label_margin
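
For reference, with those values the arithmetic works out as follows (a small sketch using the numbers quoted above; the margin of context pixels is removed from both sides of each dimension):

```python
input_width, input_height = 900, 900
label_margin = 186

# The margin is stripped from both sides, so the network
# predicts only on the central region of the padded input.
output_height = input_height - 2 * label_margin
output_width = input_width - 2 * label_margin
```

That leaves a 528x528 output region for each 900x900 padded input.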
    
    opened by engahmed1190 2
  • Context module training implementation plans

    Thanks for creating this implementation. Do you have any plans to implement training of the context module (to allow producing full resolution segmentation maps)?

    opened by OliverColeman 3
  • palette conversion not needed

    https://github.com/nicolov/segmentation_keras/blob/master/convert_masks.py isn't necessary.

    Just use Pillow and you can load the classes separately from the color palette, which means it will already be in the format you want!

    from https://github.com/aurora95/Keras-FCN/blob/master/utils/SegDataGenerator.py#L203

    from PIL import Image

    label = Image.open(label_filepath)
    if self.save_to_dir and self.palette is None:
        self.palette = label.palette
    

    cool right?
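
    The point can be checked with a tiny synthetic palette image (a self-contained sketch, not taken from either repo):

```python
import numpy as np
from PIL import Image

# A palette ("P" mode) PNG stores class ids directly as pixel values,
# so converting to an array yields the label map with no color decoding.
img = Image.new('P', (4, 4))                      # all pixels start at class 0
img.putpalette([0, 0, 0, 128, 0, 0] + [0] * 762)  # class 0 black, class 1 dark red
img.putpixel((0, 0), 1)                           # mark one pixel as class 1
labels = np.array(img)
# labels[0, 0] is the class index 1, not an RGB color
```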

    opened by ahundt 6