Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation


Paper

Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation
Antoine Saporta, Tuan-Hung Vu, Matthieu Cord, Patrick Pérez
valeo.ai, France
IEEE International Conference on Computer Vision (ICCV), 2021 (Poster)

If you find this code useful for your research, please cite our paper:

@inproceedings{saporta2021mtaf,
  title={Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation},
  author={Saporta, Antoine and Vu, Tuan-Hung and Cord, Matthieu and P{\'e}rez, Patrick},
  booktitle={ICCV},
  year={2021}
}

Abstract

In this work, we address the task of unsupervised domain adaptation (UDA) for semantic segmentation in the presence of multiple target domains: the objective is to train a single model that can handle all these domains at test time. Such multi-target adaptation is crucial for a variety of scenarios that real-world autonomous systems must handle. It is a challenging setup since one faces not only the domain gap between the labeled source set and the unlabeled target set, but also the distribution shifts existing within the latter among the different target domains. To this end, we introduce two adversarial frameworks: (i) multi-discriminator, which explicitly aligns each target domain to its counterparts, and (ii) multi-target knowledge transfer, which learns a target-agnostic model thanks to a multi-teacher/single-student distillation mechanism. The evaluation is done on four newly-proposed multi-target benchmarks for UDA in semantic segmentation. In all tested scenarios, our approaches consistently outperform baselines, setting competitive standards for the novel task.
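
The second framework, multi-target knowledge transfer (MTKT), distills several target-specific teacher classifiers into a single target-agnostic student. As a rough illustration of such a multi-teacher/single-student KL distillation term, here is a minimal PyTorch-style sketch; it is not the repository's implementation, and all names are illustrative:

import torch.nn.functional as F

def multi_teacher_kl_loss(student_logits, teacher_logits_list):
    # student_logits: (B, C, H, W) logits of the target-agnostic student head.
    # teacher_logits_list: one (B, C, H, W) logits tensor per target-specific
    # teacher head. Teachers are detached so only the student is updated here.
    log_p_student = F.log_softmax(student_logits, dim=1)
    loss = 0.0
    for teacher_logits in teacher_logits_list:
        p_teacher = F.softmax(teacher_logits.detach(), dim=1)
        # KL(teacher || student), summed over classes and pixels,
        # averaged over the batch.
        loss = loss + F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    return loss / len(teacher_logits_list)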

Preparation

Pre-requisites

  • Python 3.7
  • PyTorch >= 0.4.1
  • CUDA 9.0 or higher
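
You can optionally sanity-check the environment first; this one-liner prints the installed PyTorch version, its CUDA version, and whether a GPU is visible:

$ python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"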

Installation

  1. Clone the repo:
$ git clone https://github.com/valeoai/MTAF
$ cd MTAF
  2. Install OpenCV if you don't already have it:
$ conda install -c menpo opencv
  3. Install NVIDIA Apex if you don't already have it, following the instructions at https://github.com/NVIDIA/apex

  4. Install this repository and its dependencies using pip:

$ pip install -e <root_dir>

With this, you can edit the MTAF code on the fly and import MTAF functions and classes in other projects as well (see the sketch after this list).

  5. Optional: to uninstall this package, run:
$ pip uninstall MTAF
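
Once installed in editable mode, edits made under <root_dir>/mtaf take effect immediately, with no reinstall. A minimal check, assuming the editable install exposes the mtaf package (which the repository layout suggests):

$ python -c "import mtaf; print(mtaf.__file__)"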

Datasets

By default, the datasets are put in <root_dir>/data. We use symlinks to hook the MTAF codebase to the datasets. An alternative option is to explicitly specify the parameters DATA_DIRECTORY_SOURCE and DATA_DIRECTORY_TARGET in the YML configuration files.
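
For example, assuming Cityscapes has been extracted to /datasets/cityscapes (an illustrative path), the symlink can be created like so:

$ mkdir -p <root_dir>/data
$ ln -s /datasets/cityscapes <root_dir>/data/cityscapes

Alternatively, the YML configuration can point at the data directly (keys as named above; values illustrative):

DATA_DIRECTORY_SOURCE: /datasets/GTA5
DATA_DIRECTORY_TARGET: /datasets/cityscapes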

  • GTA5: Please follow the instructions here to download images and semantic segmentation annotations. The GTA5 dataset directory should have this basic structure:
<root_dir>/data/GTA5/                               % GTA dataset root
<root_dir>/data/GTA5/images/                        % GTA images
<root_dir>/data/GTA5/labels/                        % Semantic segmentation labels
...
  • Cityscapes: Please follow the instructions in Cityscape to download the images and ground-truths. The Cityscapes dataset directory should have this basic structure:
<root_dir>/data/cityscapes/                         % Cityscapes dataset root
<root_dir>/data/cityscapes/leftImg8bit              % Cityscapes images
<root_dir>/data/cityscapes/leftImg8bit/train
<root_dir>/data/cityscapes/leftImg8bit/val
<root_dir>/data/cityscapes/gtFine                   % Semantic segmentation labels
<root_dir>/data/cityscapes/gtFine/train
<root_dir>/data/cityscapes/gtFine/val
...
  • Mapillary: Please follow the instructions in Mapillary Vistas to download the images and validation ground-truths. The Mapillary Vistas dataset directory should have this basic structure:
<root_dir>/data/mapillary/                          % Mapillary dataset root
<root_dir>/data/mapillary/train                     % Mapillary train set
<root_dir>/data/mapillary/train/images
<root_dir>/data/mapillary/validation                % Mapillary validation set
<root_dir>/data/mapillary/validation/images
<root_dir>/data/mapillary/validation/labels
...
  • IDD: Please follow the instructions in IDD to download the images and validation ground-truths. The IDD Segmentation dataset directory should have this basic structure:
<root_dir>/data/IDD/                         % IDD dataset root
<root_dir>/data/IDD/leftImg8bit              % IDD images
<root_dir>/data/IDD/leftImg8bit/train
<root_dir>/data/IDD/leftImg8bit/val
<root_dir>/data/IDD/gtFine                   % Semantic segmentation labels
<root_dir>/data/IDD/gtFine/val
...

Pre-trained models

Pre-trained models can be downloaded here and placed in <root_dir>/pretrained_models.

Running the code

For evaluation, execute:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_baseline_pretrained.yml
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mdis_pretrained.yml
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mtkt_pretrained.yml

Training

For the experiments done in the paper, we used PyTorch 1.3.1 and CUDA 10.0. To aid reproduction, the random seed is fixed in the code; still, you may need to train a few times to reach comparable performance.
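
The seeding itself is done inside the repository code; purely as an illustration (not the repository's code), fixing seeds in PyTorch typically looks like this:

import random

import numpy as np
import torch

def fix_seeds(seed=0):
    # Fix the usual sources of randomness. Some CUDA kernels remain
    # non-deterministic, which is why repeated runs can still differ slightly.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)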

By default, logs and snapshots are stored in <root_dir>/experiments with this structure:

<root_dir>/experiments/logs
<root_dir>/experiments/snapshots

To train the multi-target baseline:

$ cd <root_dir>/mtaf/scripts
$ python train.py --cfg ./configs/gta2cityscapes_mapillary_baseline.yml

To train the Multi-Discriminator framework:

$ cd <root_dir>/mtaf/scripts
$ python train.py --cfg ./configs/gta2cityscapes_mapillary_mdis.yml

To train the Multi-Target Knowledge Transfer framework:

$ cd <root_dir>/mtaf/scripts
$ python train.py --cfg ./configs/gta2cityscapes_mapillary_mtkt.yml

Testing

To test the multi-target baseline:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_baseline.yml

To test the Multi-Discriminator framework:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mdis.yml

To test the Multi-Target Knowledge Transfer framework:

$ cd <root_dir>/mtaf/scripts
$ python test.py --cfg ./configs/gta2cityscapes_mapillary_mtkt.yml

Acknowledgements

This codebase is heavily borrowed from ADVENT.

License

MTAF is released under the Apache 2.0 license.

Comments
  • question about adversarial training code in train_UDA.py

    Thank you for sharing the code for your excellent work. I have some basic questions about your implementation.

        pred_trg_main = interp_target(all_pred_trg_main[i+1])  ## what does [i+1] mean?
        pred_trg_main_list.append(pred_trg_main)
        pred_trg_target = interp_target(all_pred_trg_main[0])  ## what does [0] mean?
        pred_trg_target_list.append(pred_trg_target)

    In train_UDA.py, lines 829-836, why are the indices [i+1] and [0] used, and what do they mean? Also, where is the target-agnostic classifier defined in your code?

    Thanks again and look forward to hearing back from you!

    opened by yuzhang03 2
  • the problem for training loss

    Thanks again for this enlightening work.

    I trained the Mdis method with one source and one target, but I am confused by the losses, which I plotted with TensorBoard. As I understand it, the adversarial loss should trend down while the discriminator loss trends up; in the plots below, however, both losses oscillate around a constant value. What is wrong?

    Besides, I would expect training with one source and one target to give better results than with one source and multiple targets, but in my training I don't get good results.

    I would sincerely appreciate your thoughts.

    My training config: adversarial loss weight 0.5, adversarial learning rate 1e-5, segmentation learning rate 1.25e-5.

    [TensorBoard plots: adversarial loss and discriminator loss for one source and one target]

    opened by slz929 2
  • problem for training data

    Thanks for this enlightening and practical work on multi-target DA! I have read your paper, and I noticed that the one source dataset and the three target datasets have unequal amounts of data. Does the quantity of data in each domain matter, and what is an appropriate amount of training data for MTKT? Another question: why is the KL loss used for knowledge transfer? If I want to train a word embedding instead of a segmentation map, is the KL loss still appropriate, or is there a better alternative?

    opened by slz929 2
  • About the generation of segmentation color maps

    Thanks for the great research!

    I have a question though: the mIoU you report in your paper is for 7 classes, but the segmentation colour map in the qualitative analysis seems to use the 19 classes common in domain-adaptive semantic segmentation.

    In other words, how can a model trained on 7 classes be used to generate a 19-class segmentation colour map? Or is my understanding wrong?

    I look forward to your response.

    Thank you!

    opened by liwei1101 1
  • About labels of IDD dataset

    Hello! @SportaXD Thank you for your great work!

    I was reproducing the code and noticed that the labels in the IDD dataset are provided as JSON files rather than as segmentation masks.

    How is this problem solved?

    opened by liwei1101 1
  • About MTKT code

    In train_UDA.py, line 758:

            d_main_list[i] = d_main
            optimizer_d_main_list.append(optimizer_d_main)
            d_aux_list[i] = d_aux
            optimizer_d_aux_list.append(optimizer_d_aux)
    

    If this is done (d_main_list[i] = d_main and d_aux_list[i] = d_aux), all the discriminators in the list end up being the same one. Shouldn't there be one discriminator for each classifier?

    opened by liwei1101 1
  • About 'the multi-target baseline'

    Thank you for sharing the code for your excellent work. I have some basic questions about your implementation.

    d_main = get_fc_discriminator(num_classes=num_classes)
    d_main.train()
    d_main.to(device)
    d_aux = get_fc_discriminator(num_classes=num_classes)
    d_aux.train()
    d_aux.to(device)
    

    Can you tell me why the multi-target baseline code uses only one discriminator rather than multiple discriminators? It looks like a single-domain approach. Thanks!

    opened by liwei1101 1
  • about eval_UDA.py

    Thanks for sharing your codes.

    I was impressed with your good research.

    Could you explain why the output map is not resized to the target size (cfg.TEST.OUTPUT_SIZE_TARGET) for the Mapillary dataset in line 57 of eval_UDA.py?

    When I tested the trained model on the Mapillary dataset, inference took a long time due to the large resolution.

    I'm looking forward to hearing from you.

    Thank you!

    opened by jdg900 1
  • modifying info7class.json and train_UDA.py

    We have found a small bug in "./MTAF/mtaf/dataset/cityscapes_list/info7class.json".

    The configuration file should contain 7 classes rather than 19. This shows up at the evaluation stage, where the mIoU metrics and the names of the 7 classes are printed out.

    Also, there is a typo in the comments.

    opened by mohamedelmesawy 1
  • Running MTAF on a slightly different setup

    Hello, thanks for sharing the code and for such a good contribution. I would like to run your method on a slightly different setup, specifically adapting from Cityscapes to BDD and Mapillary. I have seen that the code accepts Cityscapes as both source and target, so that shouldn't be a problem, and I have added a dataloader for BDD as target 1.

    To get the best performance, do I need to train the baseline first and then train MTKT or MDis with the baseline loaded as a pretrained model? Or do I get the best performance by directly running the MTKT or MDis training script without the baseline?

    opened by fabriziojpiva 1