Code release for Hu et al., Learning to Segment Every Thing, CVPR 2018.

Overview

Learning to Segment Every Thing

This repository contains the code for the following paper:

  • R. Hu, P. Dollár, K. He, T. Darrell, R. Girshick, Learning to Segment Every Thing. In CVPR, 2018. (PDF)
@inproceedings{hu2018learning,
  title={Learning to Segment Every Thing},
  author={Hu, Ronghang and Dollár, Piotr and He, Kaiming and Darrell, Trevor and Girshick, Ross},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2018}
}

Project Page: http://ronghanghu.com/seg_every_thing

Note: this repository is built upon the Detectron codebase for object detection and segmentation (https://github.com/facebookresearch/Detectron), based on Detectron commit 3c4c7f67d37eeb4ab15a87034003980a1d259c94. Please see README_DETECTRON.md for details.

Installation

The installation procedure follows Detectron.

Please find installation instructions for Caffe2 and Detectron in INSTALL.md.

Note: all the experiments below run on 8 GPUs on a single machine. If you have fewer than 8 GPUs available, please modify the yaml config files according to the linear scaling rule. For example, if you only have 4 GPUs, set NUM_GPUS to 4, scale SOLVER.BASE_LR down by 0.5x, and multiply SOLVER.STEPS and SOLVER.MAX_ITER by 2x.
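Detectron config options can also be overridden on the command line when launching tools/train_net.py, instead of editing the yaml files. The sketch below assumes a base 1x schedule of SOLVER.BASE_LR 0.02, SOLVER.STEPS [0, 60000, 80000], and SOLVER.MAX_ITER 90000 (check the actual values in your yaml and adjust accordingly); the config path and OUTPUT_DIR are placeholders:

# Hypothetical 4-GPU launch following the linear scaling rule:
# halve the base learning rate, double the schedule.
python2 tools/train_net.py \
    --cfg configs/bbox2mask_coco/voc2nonvoc/eval_e2e/e2e_baseline.yaml \
    OUTPUT_DIR /tmp/detectron-output \
    NUM_GPUS 4 \
    SOLVER.BASE_LR 0.01 \
    SOLVER.STEPS "[0, 120000, 160000]" \
    SOLVER.MAX_ITER 180000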

Part 1: Controlled Experiments on the COCO dataset

In this work, we explore our approach in two settings. First, we use the COCO dataset to simulate the partially supervised instance segmentation task as a means of establishing quantitative results on a dataset with high-quality annotations and evaluation metrics. Specifically, we split the full set of COCO categories into a subset with mask annotations and a complementary subset for which the system has access to only bounding box annotations. Because the COCO dataset involves only a small number (80) of semantically well-separated classes, quantitative evaluation is precise and reliable.

In our experiments, we split COCO into either

  • VOC Split: 20 PASCAL-VOC classes vs. 60 non-PASCAL-VOC classes. We experiment with 1) VOC -> non-VOC, where set A={VOC}, and 2) non-VOC -> VOC, where set A={non-VOC}.
  • Random Splits: the 80 COCO classes randomly partitioned into two subsets A and B.

and experiment with two training setups:

  • Stage-wise training, where a Faster R-CNN detector is first trained and kept frozen, and the mask branch (including the weight transfer function) is then added and trained.
  • End-to-end training, where the RPN, the box head, the mask head and the weight transfer function are trained together.

Please refer to Section 4 of our paper for details on the COCO experiments.

COCO Installation: To run the COCO experiments, first download the COCO dataset and install it according to the dataset guide.

Evaluation

The following experiments correspond to the results in Section 4.2 and Table 2 of our paper.

To run the experiments:

  1. Split the COCO dataset into VOC / non-VOC classes:
    python2 lib/datasets/bbox2mask_dataset_processing/coco/split_coco_dataset_voc_nonvoc.py.
  2. Set the training split via the SPLIT environment variable:
  • To train on VOC -> non-VOC, where set A={VOC}, use export SPLIT=voc2nonvoc.
  • To train on non-VOC -> VOC, where set A={non-VOC}, use export SPLIT=nonvoc2voc.

Then use tools/train_net.py to run the following yaml config files for each experiment, with either a ResNet-50-FPN or a ResNet-101-FPN backbone.

Please follow the instructions in GETTING_STARTED.md to train with the config files. The training scripts automatically test the trained models and print the bbox and mask APs on the VOC ('coco_split_voc_2014_minival') and non-VOC ('coco_split_nonvoc_2014_minival') splits.
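For example, training the MaskX R-CNN model listed below with the ResNet-50-FPN backbone might look like the following (a sketch; OUTPUT_DIR is a placeholder, pick any writable directory):

export SPLIT=voc2nonvoc
python2 tools/train_net.py \
    --cfg configs/bbox2mask_coco/${SPLIT}/eval_e2e/e2e_clsbox_2_layer_mlp_nograd.yaml \
    OUTPUT_DIR /tmp/detectron-output-${SPLIT}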

Using ResNet-50-FPN backbone:

  1. Class-agnostic (baseline): configs/bbox2mask_coco/${SPLIT}/eval_e2e/e2e_baseline.yaml
  2. MaskX R-CNN (ours, transfer+MLP): configs/bbox2mask_coco/${SPLIT}/eval_e2e/e2e_clsbox_2_layer_mlp_nograd.yaml
  3. Fully-supervised (oracle): configs/bbox2mask_coco/oracle/e2e_mask_rcnn_R-50-FPN_1x.yaml

Using ResNet-101-FPN backbone:

  1. Class-agnostic (baseline): configs/bbox2mask_coco/${SPLIT}/eval_e2e_R101/e2e_baseline.yaml
  2. MaskX R-CNN (ours, transfer+MLP): configs/bbox2mask_coco/${SPLIT}/eval_e2e_R101/e2e_clsbox_2_layer_mlp_nograd.yaml
  3. Fully-supervised (oracle): configs/bbox2mask_coco/oracle/e2e_mask_rcnn_R-101-FPN_1x.yaml

Ablation Study

This section runs ablation studies on the VOC Split (20 PASCAL-VOC classes vs. 60 non-PASCAL-VOC classes) using the ResNet-50-FPN backbone. The results correspond to Section 4.1 and Table 1 of our paper.

To run the experiments:

  1. (If you haven't done so in the above section) Split the COCO dataset into VOC / non-VOC classes:
    python2 lib/datasets/bbox2mask_dataset_processing/coco/split_coco_dataset_voc_nonvoc.py.
  2. For Studies 1, 2, 3, and 5, download the pre-trained Faster R-CNN model with ResNet-50-FPN by running
    bash lib/datasets/data/trained_models/fetch_coco_faster_rcnn_model.sh.
    (Alternatively, you can train it yourself using configs/12_2017_baselines/e2e_faster_rcnn_R-50-FPN_1x.yaml and copy it to lib/datasets/data/trained_models/28594643_model_final.pkl.)
  3. For Study 1, add the GloVe and random embeddings of the COCO class names to the Faster R-CNN weights with
    python2 lib/datasets/bbox2mask_dataset_processing/coco/add_embeddings_to_weights.py.
  4. Set the training split via the SPLIT environment variable:
  • To train on VOC -> non-VOC, where set A={VOC}, use export SPLIT=voc2nonvoc.
  • To train on non-VOC -> VOC, where set A={non-VOC}, use export SPLIT=nonvoc2voc.

Then use tools/train_net.py to run the following yaml config files for each experiment.
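As above, each config is launched through tools/train_net.py; for instance, the GloVe-embedding input ablation from Study 1 below might be run as (OUTPUT_DIR is a placeholder):

export SPLIT=voc2nonvoc
python2 tools/train_net.py \
    --cfg configs/bbox2mask_coco/${SPLIT}/ablation_input/glove_2_layer.yaml \
    OUTPUT_DIR /tmp/detectron-ablation-${SPLIT}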

Study 1: Ablation on the input to the weight transfer function (Table 1a)

  • transfer w/ randn: configs/bbox2mask_coco/${SPLIT}/ablation_input/randn_2_layer.yaml
  • transfer w/ GloVe: configs/bbox2mask_coco/${SPLIT}/ablation_input/glove_2_layer.yaml
  • transfer w/ cls: configs/bbox2mask_coco/${SPLIT}/ablation_input/cls_2_layer.yaml
  • transfer w/ box: configs/bbox2mask_coco/${SPLIT}/ablation_input/box_2_layer.yaml
  • transfer w/ cls+box: configs/bbox2mask_coco/${SPLIT}/eval_sw/clsbox_2_layer.yaml
  • class-agnostic (baseline): configs/bbox2mask_coco/${SPLIT}/eval_sw/baseline.yaml
  • fully supervised (oracle): configs/bbox2mask_coco/oracle/mask_rcnn_frozen_features_R-50-FPN_1x.yaml

Study 2: Ablation on the structure of the weight transfer function (Table 1b)

  • transfer w/ 1-layer, none: configs/bbox2mask_coco/${SPLIT}/ablation_structure/clsbox_1_layer.yaml
  • transfer w/ 2-layer, ReLU: configs/bbox2mask_coco/${SPLIT}/ablation_structure/relu/clsbox_2_layer_relu.yaml
  • transfer w/ 2-layer, LeakyReLU: same as 'transfer w/ cls+box' in Study 1
  • transfer w/ 3-layer, ReLU: configs/bbox2mask_coco/${SPLIT}/ablation_structure/relu/clsbox_3_layer_relu.yaml
  • transfer w/ 3-layer, LeakyReLU: configs/bbox2mask_coco/${SPLIT}/ablation_structure/clsbox_3_layer.yaml

Study 3: Impact of the MLP mask branch (Table 1c)

  • class-agnostic: same as 'class-agnostic (baseline)' in Study 1
  • class-agnostic+MLP: configs/bbox2mask_coco/${SPLIT}/ablation_mlp/baseline_mlp.yaml
  • transfer: same as 'transfer w/ cls+box' in Study 1
  • transfer+MLP: configs/bbox2mask_coco/${SPLIT}/ablation_mlp/clsbox_2_layer_mlp.yaml

Study 4: Ablation on the training strategy (Table 1d)

  • class-agnostic + sw: same as 'class-agnostic (baseline)' in Study 1
  • transfer + sw: same as 'transfer w/ cls+box' in Study 1
  • class-agnostic + e2e: configs/bbox2mask_coco/${SPLIT}/eval_e2e/e2e_baseline.yaml
  • transfer + e2e: configs/bbox2mask_coco/${SPLIT}/ablation_e2e_stopgrad/e2e_clsbox_2_layer.yaml
  • transfer + e2e + stopgrad: configs/bbox2mask_coco/${SPLIT}/ablation_e2e_stopgrad/e2e_clsbox_2_layer_nograd.yaml

Study 5: Comparison of random A/B splits (Figure 3)

Note: this ablation study takes a HUGE amount of computation. It consists of 50 training runs (= 5 trials × 5 set-A class counts (20/30/40/50/60) × 2 settings (ours/baseline)), and each run takes approximately 9 hours on 8 GPUs.

Before running Study 5:

  1. Split the COCO dataset into random class splits (this may take a while):
    python2 lib/datasets/bbox2mask_dataset_processing/coco/split_coco_dataset_randsplits.py.
  2. Set the training split via the SPLIT environment variable (e.g. export SPLIT=E1_A20B60). Split names have the format E%d_A%dB%d; for example, E1_A20B60 is trial No. 1 with 20 random classes in set A and 60 random classes in set B. There are 5 trials (E1 to E5), each with 20/30/40/50/60 random classes in set A (A20B60 to A60B20), yielding 25 splits in total, from E1_A20B60 to E5_A60B20.

Then use tools/train_net.py to run the following yaml config files for each experiment.

  • class-agnostic (baseline): configs/bbox2mask_coco/randsplits/eval_sw/${SPLIT}_baseline.yaml
  • transfer w/ cls+box, 2-layer, LeakyReLU: configs/bbox2mask_coco/randsplits/eval_sw/${SPLIT}_clsbox_2_layer.yaml
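For example, to launch the transfer model on trial 1 with 20 random classes in set A (OUTPUT_DIR is a placeholder):

export SPLIT=E1_A20B60
python2 tools/train_net.py \
    --cfg configs/bbox2mask_coco/randsplits/eval_sw/${SPLIT}_clsbox_2_layer.yaml \
    OUTPUT_DIR /tmp/detectron-randsplits-${SPLIT}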

Part 2: Large-scale Instance Segmentation on the Visual Genome dataset

In our second setting, we train a large-scale instance segmentation model on 3000 categories using the Visual Genome (VG) dataset. On Visual Genome, set A (with mask data) is the 80 COCO classes, while set B (with only bounding-box data) is the remaining VG classes that are not in COCO.

Please refer to Section 5 of our paper for details on the Visual Genome experiments.

Inference

To run inference, download the pre-trained final model weights by running:
bash lib/datasets/data/trained_models/fetch_vg3k_final_model.sh
(Alternatively, you may train these weights yourself following the training section below.)

Then, use tools/infer_simple.py for prediction. Note: due to the large number of classes and the model loading overhead, prediction on the first image can take a while.

Using ResNet-50-FPN backbone:

python2 tools/infer_simple.py \
    --cfg configs/bbox2mask_vg/eval_sw/runtest_clsbox_2_layer_mlp_nograd.yaml \
    --output-dir /tmp/detectron-visualizations-vg3k \
    --image-ext jpg \
    --thresh 0.5 --use-vg3k \
    --wts lib/datasets/data/trained_models/33241332_model_final_coco2vg3k_seg.pkl \
    demo_vg3k

Using ResNet-101-FPN backbone:

python2 tools/infer_simple.py \
    --cfg configs/bbox2mask_vg/eval_sw_R101/runtest_clsbox_2_layer_mlp_nograd_R101.yaml \
    --output-dir /tmp/detectron-visualizations-vg3k-R101 \
    --image-ext jpg \
    --thresh 0.5 --use-vg3k \
    --wts lib/datasets/data/trained_models/33219850_model_final_coco2vg3k_seg.pkl \
    demo_vg3k

Training

Visual Genome Installation: To run the Visual Genome experiments, first download the Visual Genome dataset and install it according to the dataset guide. Then download the converted Visual Genome json dataset files (in COCO-format) by running:
bash lib/datasets/data/vg3k_bbox2mask/fetch_vg3k_json.sh.
(Alternatively, you may build the COCO-format json dataset files yourself using the scripts in lib/datasets/bbox2mask_dataset_processing/vg/)

Here, we adopt the stage-wise training strategy described in Section 5 of our paper. First, in Stage 1, a Faster R-CNN detector is trained on all 3k Visual Genome classes (set A+B). Then, in Stage 2, the mask branch (with the weight transfer function) is added and trained on the mask data of the 80 COCO classes (set A). Finally, the trained mask branch is applied to all 3k Visual Genome classes (set A+B).

Before training on the mask data of the 80 COCO classes (set A) in Stage 2, a "surgery" is performed to convert the 3k VG detection weights into 80 COCO detection weights, so that the mask branch only predicts mask outputs for the 80 COCO classes (the weight transfer function only takes these 80 classes as input), which saves GPU memory. After training, another "surgery" converts the 80 COCO detection weights back into the 3k VG detection weights.

To run the experiments, use tools/train_net.py with the following yaml config files, with either a ResNet-50-FPN or a ResNet-101-FPN backbone.

Using ResNet-50-FPN backbone:

  1. Stage 1 (bbox training on 3k VG classes): run tools/train_net.py with configs/bbox2mask_vg/eval_sw/stage1_e2e_fast_rcnn_R-50-FPN_1x_1im.yaml
  2. Weights "surgery" 1: convert 3k VG detection weights to 80 COCO detection weights:
    python2 tools/vg3k_training/convert_vg3k_det_to_coco.py --input_model /path/to/model_1.pkl --output_model /path/to/model_1_vg3k2coco_det.pkl
    where /path/to/model_1.pkl is the path to the final model trained in Stage 1 above.
  3. Stage 2 (mask training on 80 COCO classes): run tools/train_net.py with configs/bbox2mask_vg/eval_sw/stage2_cocomask_clsbox_2_layer_mlp_nograd.yaml
    IMPORTANT: when training Stage 2, set TRAIN.WEIGHTS to /path/to/model_1_vg3k2coco_det.pkl (the output of convert_vg3k_det_to_coco.py) in tools/train_net.py; see the example launch at the end of this section.
  4. Weights "surgery" 2: convert 80 COCO detection weights back to 3k VG detection weights:
    python2 tools/vg3k_training/convert_coco_seg_to_vg3k.py --input_model /path/to/model_2.pkl --output_model /path/to/model_2_coco2vg3k_seg.pkl
    where /path/to/model_2.pkl is the path to the final model trained in Stage 2 above. The output /path/to/model_2_coco2vg3k_seg.pkl can be used for VG 3k instance segmentation.

Using ResNet-101-FPN backbone:

  1. Stage 1 (bbox training on 3k VG classes): run tools/train_net.py with configs/bbox2mask_vg/eval_sw_R101/stage1_e2e_fast_rcnn_R-101-FPN_1x_1im.yaml
  2. Weights "surgery" 1: convert 3k VG detection weights to 80 COCO detection weights:
    python2 tools/vg3k_training/convert_vg3k_det_to_coco.py --input_model /path/to/model_1.pkl --output_model /path/to/model_1_vg3k2coco_det.pkl
    where /path/to/model_1.pkl is the path to the final model trained in Stage 1 above.
  3. Stage 2 (mask training on 80 COCO classes): run tools/train_net.py with configs/bbox2mask_vg/eval_sw_R101/stage2_cocomask_clsbox_2_layer_mlp_nograd_R101.yaml
    IMPORTANT: when training Stage 2, set TRAIN.WEIGHTS to /path/to/model_1_vg3k2coco_det.pkl (the output of convert_vg3k_det_to_coco.py) in tools/train_net.py.
  4. Weights "surgery" 2: convert 80 COCO detection weights back to 3k VG detection weights:
    python2 tools/vg3k_training/convert_coco_seg_to_vg3k.py --input_model /path/to/model_2.pkl --output_model /path/to/model_2_coco2vg3k_seg.pkl
    where /path/to/model_2.pkl is the path to the final model trained in Stage 2 above. The output /path/to/model_2_coco2vg3k_seg.pkl can be used for VG 3k instance segmentation.

(Alternatively, you may skip Stage 1 and Weights "surgery" 1 by directly downloading the pre-trained VG 3k detection weights with bash lib/datasets/data/trained_models/fetch_vg3k_faster_rcnn_model.sh, and leaving TRAIN.WEIGHTS at the values specified in the Stage 2 yaml configs.)
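For reference, a Stage 2 launch with the TRAIN.WEIGHTS override passed on the command line might look like the following sketch (paths are placeholders; you can equally edit TRAIN.WEIGHTS in the yaml directly):

python2 tools/train_net.py \
    --cfg configs/bbox2mask_vg/eval_sw/stage2_cocomask_clsbox_2_layer_mlp_nograd.yaml \
    OUTPUT_DIR /tmp/detectron-vg3k-stage2 \
    TRAIN.WEIGHTS /path/to/model_1_vg3k2coco_det.pkl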
