Official re-implementation of the Calibrated Adversarial Refinement model described in the paper Calibrated Adversarial Refinement for Stochastic Semantic Segmentation

Overview

Calibrated Adversarial Refinement for Stochastic Semantic Segmentation

Python 3.7 PyTorch 1.4 Apache

Official PyTorch implementation of the Calibrated Adversarial Refinement model described in the paper Calibrated Adversarial Refinement for Stochastic Semantic Segmentation, accepted at ICCV 2021. An overview of the model architecture is depicted below. We show ambiguous boundary segmentation as a use case, where blue and red pixels in the input image are separable by different vertical boundaries, resulting in multiple valid labels.

[Figure: overview of the model architecture]

Results on the stochastic version of the Cityscapes dataset are shown below. The leftmost column illustrates input images overlaid with ground truth labels, the middle section shows 8 randomly sampled predictions from the refinement network, and the final column shows aleatoric uncertainty maps extracted from the calibration network.

[Figure: qualitative results on the stochastic Cityscapes dataset]

The code reproducing the illustrative toy regression example presented in Section 5.1 of the paper can be found in this repository.

Getting Started

Prerequisites

  • Python3
  • NVIDIA GPU + CUDA CuDNN

This was tested on an Ubuntu 18.04 system with a single 16GB Tesla V100 GPU, but it might work on other operating systems as well.

Setup virtual environment

To install the requirements for this code run:

python3 -m venv ~/carsss_venv
source ~/carsss_venv/bin/activate
pip install -r requirements.txt

Directory tree

.
├── data
│   └── datasets
│       ├── lidc
│       └── cityscapes
├── models
│   ├── discriminators
│   ├── general
│   ├── generators
│   │   └── calibration_nets
│   └── losses
├── results
│   └── output
├── testing
├── training
└── utils

Datasets

For the 1D regression dataset experiments, please refer to this repository. Information on how to obtain the stochastic semantic segmentation datasets can be found below.

Download the LIDC dataset

The pre-processed 180x180 2D crops for the Lung Image Database Consortium image collection dataset (LIDC-IDRI), as described in A Hierarchical Probabilistic U-Net for Modeling Multi-Scale Ambiguities (2019) and used in this work, are made publicly available by Kohl et al. and can be downloaded here.

After downloading the dataset, extract each file under ./data/datasets/lidc/. This should give three folders under that directory: lidc_crops_train, lidc_crops_val, and lidc_crops_test.
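After extraction, the layout under ./data/datasets/lidc/ should therefore look roughly as follows (the validation folder name is inferred from the naming pattern above and may differ):

./data/datasets/lidc
├── lidc_crops_train
├── lidc_crops_val
└── lidc_crops_test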

Please note that the version of the dataset linked above, from the official repository of the Hierarchical Probabilistic U-Net, contains 8843 images for training, 1993 for validation and 1980 for testing, rather than the 8882, 1996 and 1992 images used in our experiments; however, the score remains the same.

Download the pre-processed Cityscapes dataset with the black-box predictions

As described in our paper, we integrate our model on top of a black-box segmentation network. We used a pre-trained DeepLabV3+ (Xception65 + ASPP) model, publicly available here. We found that this model obtains a mIoU score of 0.79 on the official Cityscapes test set.

To get the official 19-class Cityscapes dataset:

  1. Visit the Cityscapes website and create an account
  2. Download the images and annotations
  3. Extract the files and move the folders gtFine and leftImg8bit into a new directory for the raw data, i.e. ./data/datasets/cityscapes/raw_data.
  4. Create the 19-class labels by following this issue.
  5. Configure your data directories in ./data/datasets/cityscapes/preprocessing_config.py.
  6. Run ./data/datasets/cityscapes/preprocessing.py to pre-process the data into downscaled numpy arrays and save them under ./data/datasets/cityscapes/processed.

Subsequently, download the black-box predictions under ./data/datasets/cityscapes/ and extract them by running tar -zxvf cityscapes_bb_preds.tar.gz

Finally, move the black-box predictions into the processed Cityscapes folder and set up the test set by running ./data/datasets/cityscapes/move_bb_preds.py (see the command sketch below).
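Put together, the preprocessing and black-box-prediction steps above amount to roughly the following commands (a sketch; it assumes the archive was downloaded into the Cityscapes data directory and that the scripts can be invoked directly from there):

cd ./data/datasets/cityscapes/
python preprocessing.py               # step 6: pre-process into downscaled numpy arrays under ./processed
tar -zxvf cityscapes_bb_preds.tar.gz  # extract the downloaded black-box predictions
python move_bb_preds.py               # move the predictions into ./processed and set up the test set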

Train your own models

To train your own model on the LIDC dataset, set LABELS_CHANNELS=2 in line 29 of ./utils/constants.py and run:

python main.py --mode train --debug '' --calibration_net SegNetCalNet --z_dim 8 --batch-size 32 --dataset LIDC --class_flip ''

To train your own model using the black-box predictions on the modified Cityscapes dataset, set LABELS_CHANNELS=25 in line 29 of ./utils/constants.py and run:

python main.py --mode train --debug '' --calibration_net ToyCalNet --z_dim 32 --batch-size 16 --dataset CITYSCAPES19 --class_flip True
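In both cases, the only dataset-specific change is the LABELS_CHANNELS constant on line 29 of ./utils/constants.py. As a sketch (the surrounding contents of the file may differ), the line for the LIDC run would read:

LABELS_CHANNELS = 2  # 2 label channels for LIDC; set to 25 for the modified Cityscapes run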

Launching a run in train mode will create a new directory named after the date and time of the start of your run under ./results/output/, where plots documenting the training progress are saved and models are checkpointed. For example, a run launched at 12:00:00 on 1/1/2020 will create a new folder ./results/output/2020-01-01_12:00:00/. To prevent the creation of this directory, set --debug False in the run command above.

Evaluation

LIDC pre-trained model

A pre-trained model on LIDC can be downloaded from here. To evaluate this model, set LABELS_CHANNELS=2 in ./utils/constants.py, move the downloaded pickle file under ./results/output/LIDC/saved_models/ and run:

python main.py --mode test --test_model_date LIDC --test_model_suffix LIDC_CAR_Model --calibration_net SegNetCalNet --z_dim 8 --dataset LIDC --class_flip ''

Cityscapes pre-trained model

A pre-trained model on the modified Cityscapes dataset can be downloaded from here. To evaluate this model, set LABELS_CHANNELS=25 and IMSIZE = (256, 512) in ./utils/constants.py, move the downloaded pickle file under ./results/output/CS/saved_models/ and run:

python main.py --mode test --test_model_date CS --test_model_suffix CS_CAR_Model --calibration_net ToyCalNet --z_dim 32 --dataset CITYSCAPES19 --class_flip True
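For reference, the constants mentioned above would be set in ./utils/constants.py roughly as follows (illustrative values only; the exact layout of the file may differ):

LABELS_CHANNELS = 25   # 25 label channels for the modified Cityscapes dataset
IMSIZE = (256, 512)    # downscaled input resolution used for the Cityscapes evaluation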

Citation

If you use this code for your research, please cite our paper Calibrated Adversarial Refinement for Stochastic Semantic Segmentation:

@InProceedings{Kassapis_2021_ICCV,
    author    = {Kassapis, Elias and Dikov, Georgi and Gupta, Deepak K. and Nugteren, Cedric},
    title     = {Calibrated Adversarial Refinement for Stochastic Semantic Segmentation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {7057-7067}
}

License

The code in this repository is published under the Apache License Version 2.0.
