
Overview

Weakly Supervised Segmentation with TensorFlow

This repo contains a TensorFlow implementation of weakly supervised instance segmentation as described in Simple Does It: Weakly Supervised Instance and Semantic Segmentation, by Khoreva et al. (CVPR 2017).

The idea behind weakly supervised segmentation is to train a model using cheap-to-generate label approximations (e.g., bounding boxes) as substitute/guiding labels for computer vision classification tasks that usually require very detailed labels. In semantic labelling, each image pixel is assigned to a specific class (e.g., boat, car, background, etc.). In instance segmentation, all the pixels belonging to the same object instance are given the same instance ID.

Per [2014a], pixelwise mask annotations are far more expensive to generate than object bounding box annotations (requiring up to 15x more time). Models like Simple Does It (SDI) [2016a] claim that a weakly supervised approach can reach 95% of the quality of the fully supervised model, both for semantic labelling and instance segmentation.

Simple Does It (SDI)

Experimental Setup for Instance Segmentation

In weakly supervised instance segmentation, there are no pixel-wise annotations (i.e., no segmentation masks) that can be used to train a model. Yet, we aim to train a model that can still predict segmentation masks by only being given an input image and bounding boxes for the objects of interest in that image.

The masks used for training are generated starting from individual object bounding boxes. For each annotated bounding box, we generate a segmentation mask using the GrabCut method (although any other method could be used), and train a convnet to regress from the image and bounding box information to the instance segmentation mask.

Note that in the original paper, a more sophisticated segmenter is used (M∩G+).
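
For illustration, the bounding-box-to-mask step can be approximated with OpenCV's GrabCut. The helper below is a minimal sketch (not code from this repo); it only assumes the standard cv2.grabCut API and an (x, y, w, h) box format:

import numpy as np
import cv2

def grabcut_from_bbox(image_bgr, bbox, iters=5):
    """Generate a binary pseudo-mask for one annotated bounding box.

    image_bgr: H x W x 3 uint8 image (OpenCV BGR order).
    bbox: (x, y, w, h) bounding box of the object instance.
    """
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, bbox, bgd_model, fgd_model, iters, cv2.GC_INIT_WITH_RECT)
    # Keep definite and probable foreground pixels as the pseudo-label
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)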

Network

SDI validates its approach by repurposing two different instance segmentation architectures (DeepMask [2015a] and DeepLab2 VGG-16 [2016b]). Here, we use the OSVOS FCN (see Section 3.1 of [2016c]).

Setup

The code in this repo was developed and tested using Anaconda3 v.4.4.0. To reproduce our conda environment, please use the following files:

On Ubuntu:

On Windows:

Jupyter Notebooks

The recommended way to test this implementation is to use the following Jupyter notebooks:

  • VGG16 Net Surgery: The weakly supervised segmentation techniques presented in the "Simple Does It" paper use a backbone convnet (either DeepLab or VGG16) pre-trained on ImageNet. This pre-trained network takes RGB images as input (W x H x 3). However, the weakly supervised version is trained using 4-channel inputs: RGB + a binary mask with a filled bounding box of the object instance. Therefore, we need to perform net surgery and create a 4-channel input version of the VGG16 net, initialized with the 3-channel parameter values except for the additional convolutional filters (we use Gaussian initialization for them). A sketch of this step is shown after this list.
  • "Simple Does It" Grabcut Training for Instance Segmentation: This notebook performs training of the SDI Grabcut weakly supervised model for instance segmentation. Following the instructions provided in Section "6. Instance Segmentation Results" of the "Simple Does It" paper, we use the Berkeley-augmented Pascal VOC segmentation dataset that provides per-instance segmentation masks for VOC2012 data. The Berkley augmented dataset can be downloaded from here. Again, the SDI Grabcut training is done using a 4-channel input VGG16 network pre-trained on ImageNet, so make sure to run the VGG16 Net Surgery notebook first!
  • "Simple Does It" Weakly Supervised Instance Segmentation (Testing): The sample results shown in the notebook come from running our trained model on the validation split of the Berkeley-augmented dataset.

Link to Pre-trained model and BK-VOC data files

The pre-processed BK-VOC dataset, "grabcut" segmentations, and results, as well as pre-trained models (vgg_16_4chan_weak.ckpt-50000), can be found here:

If you'd rather download the Berkeley-augmented Pascal VOC segmentation dataset, which provides per-instance segmentation masks for VOC2012 data, from its origin, click here. Then, execute lines similar to the following in dataset.py to generate the intermediary files used by this project:

if __name__ == '__main__':
    dataset = BKVOCDataset()
    dataset.prepare()

Make sure to set the paths at the top of dataset.py to the correct location:

if sys.platform.startswith("win"):
    _BK_VOC_DATASET = "E:/datasets/bk-voc/benchmark_RELEASE/dataset"
else:
    _BK_VOC_DATASET = '/media/EDrive/datasets/bk-voc/benchmark_RELEASE/dataset'

Training

The fully supervised version of the instance segmentation network whose performance we're trying to match is trained using the RGB images as inputs. The weakly supervised version is trained using 4-channel inputs: RGB + a binary mask with a filled bounding box of the object instance. In the latter case, the same RGB image may appear in several input samples (as many times as there are object instances associated with that RGB image).

To be clear, the output labels used for training are NOT user-provided detailed groundtruth annotations. There are no such groundtruths in the weakly supervised scenario. Instead, the labels are the segmentation masks generated using the GrabCut method. The weakly supervised model is trained to regress from an image and bounding box information to a generated segmentation mask.
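
For intuition, here is a minimal sketch of how a 4-channel input sample could be assembled from an RGB image and one bounding box (the helper name and the (x, y, w, h) box format are assumptions, not code from this repo):

import numpy as np

def make_4channel_input(image_rgb, bbox):
    """Stack an RGB image with a filled bounding-box mask into an H x W x 4 input.

    image_rgb: H x W x 3 array; bbox: (x, y, w, h) for one object instance.
    """
    h, w = image_rgb.shape[:2]
    box_mask = np.zeros((h, w, 1), dtype=image_rgb.dtype)
    x, y, bw, bh = bbox
    box_mask[y:y + bh, x:x + bw, 0] = 255  # filled bounding box
    return np.concatenate([image_rgb, box_mask], axis=2)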

Testing

The sample results shown here come from running our trained model on the validation split of the Berkeley-augmented dataset (see the testing notebook). Below, we (very) subjectively categorize them as "pretty good" and "not so great".

Pretty good

Not so great

References

2016

  • [2016a] Khoreva et al. 2016. Simple Does It: Weakly Supervised Instance and Semantic Segmentation. [arXiv] [web]
  • [2016b] Chen et al. 2016. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. [arXiv]
  • [2016c] Caelles et al. 2016. OSVOS: One-Shot Video Object Segmentation. [arXiv]

2015

  • [2015a] Pinheiro et al. 2015. DeepMask: Learning to Segment Object Candidates. [arXiv]

2014

  • [2014a] Lin et al. 2014. Microsoft COCO: Common Objects in Context. [arXiv] [web]