Dual Path Learning for Domain Adaptation of Semantic Segmentation

Official PyTorch implementation of "Dual Path Learning for Domain Adaptation of Semantic Segmentation".

Accepted by ICCV 2021. Paper

Requirements

  • Python 3.6
  • torch==1.5
  • torchvision==0.6
  • Pillow==7.1.2
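
A minimal environment setup consistent with the list above might look like the following sketch; the conda environment name and the exact torch 1.5 build (CPU vs. a specific CUDA version) are assumptions and should be matched to your hardware:

    # create and activate a Python 3.6 environment (the name "dpl" is arbitrary)
    conda create -n dpl python=3.6 -y
    conda activate dpl
    # install the pinned dependencies listed above
    pip install torch==1.5.0 torchvision==0.6.0 Pillow==7.1.2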

Dataset Preparations

For the GTA5->Cityscapes scenario, download the GTA5 and Cityscapes datasets.

For further evaluation on the SYNTHIA->Cityscapes scenario, additionally download the SYNTHIA dataset.

The folder should be structured as:

|DPL
|── DPL_master/
|── CycleGAN_DPL/
|── data/
│   ├── Cityscapes/
│   │   ├── data/
│   │   │   ├── gtFine/
│   │   │   ├── leftImg8bit/
│   ├── GTA5/
│   │   ├── images/
│   │   ├── labels/
│   │   ├── ...
│   ├── synthia/
│   │   ├── RGB/
│   │   ├── GT/
│   │   ├── Depth/
│   │   ├── ...
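
If the datasets are already stored elsewhere on disk, one convenient way to reproduce this layout is to symlink them into ./data. The source paths below are placeholders for wherever your copies actually live:

    mkdir -p data/Cityscapes
    ln -s /path/to/cityscapes/data data/Cityscapes/data   # must contain gtFine/ and leftImg8bit/
    ln -s /path/to/gta5 data/GTA5                         # must contain images/ and labels/
    ln -s /path/to/synthia data/synthia                   # must contain RGB/, GT/ and Depth/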

Evaluation

Download the pre-trained models from Pretrained_Resnet_GTA5 [Google_Drive, BaiduYun(Code:t7t8)] and save the unzipped models in ./DPL_master/DPL_pretrained. Download the translated target images from DPI2I_City2GTA_Resnet [Google_Drive, BaiduYun(Code:cf5a)] and save the unzipped images in ./DPL_master/DPI2I_images/DPI2I_City2GTA_Resnet/val. Then you can evaluate DPL and DPL-Dual as follows:

  • Evaluation of DPL
    cd DPL_master
    python evaluation.py --init-weights ./DPL_pretrained/Resnet_GTA5_DPLst4_T.pth --save path_to_DPL_results/results --log-dir path_to_DPL_results
    
  • Evaluation of DPL-Dual
    python evaluation_DPL.py --data-dir-targetB ./DPI2I_images/DPI2I_City2GTA_Resnet --init-weights_S ./DPL_pretrained/Resnet_GTA5_DPLst4_S.pth --init-weights_T ./DPL_pretrained/Resnet_GTA5_DPLst4_T.pth --save path_to_DPL_dual_results/results --log-dir path_to_DPL_dual_results
    

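For reference, one possible way to place the downloads is sketched below; the archive file names are assumptions, only the target directories are fixed by the commands above. Adjust the unzip targets if the archives already contain these directory levels:

    cd DPL_master
    mkdir -p DPL_pretrained DPI2I_images/DPI2I_City2GTA_Resnet/val
    unzip path_to_downloads/Pretrained_Resnet_GTA5.zip -d DPL_pretrained
    unzip path_to_downloads/DPI2I_City2GTA_Resnet.zip -d DPI2I_images/DPI2I_City2GTA_Resnet/val
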
More pretrained models and translated target images for other settings can be downloaded from:

Training

The training process of DPL consists of two phases: single-path warm-up and DPL training. The training example below is given for the default setting: GTA5->Cityscapes with DeepLab-V2 (ResNet-101 backbone).

Quick start for DPL training

Download the pretrained models model_S and model_T [Google_Drive, BaiduYun(Code: 3ndm)], save model_S to path_to_model_S and model_T to path_to_model_T, then you can train DPL as follows:

  1. Train dual path image generation module.

    cd ../CycleGAN_DPL
    python train.py --dataroot ../data --name dual_path_I2I --A_setroot GTA5/images --B_setroot Cityscapes/leftImg8bit/train --model cycle_diff --lambda_semantic 1 --init_weights_S path_to_model_S --init_weights_T path_to_model_T
    
  2. Generate transferred images with dual path image generation module.

    • Generate transferred GTA5->Cityscapes images.
    python test.py --name dual_path_I2I --no_dropout --load_size 1024 --crop_size 1024 --preprocess scale_width --dataroot ../data/GTA5/images --model_suffix A  --results_dir DPI2I_path_to_GTA52cityscapes
    
    • Generate transferred Cityscapes->GTA5 images.
     python test.py --name dual_path_I2I --no_dropout --load_size 1024 --crop_size 1024 --preprocess scale_width --dataroot ../data/Cityscapes/leftImg8bit/train --model_suffix B  --results_dir DPI2I_path_to_cityscapes2GTA5/train
     
     python test.py --name dual_path_I2I --no_dropout --load_size 1024 --crop_size 1024 --preprocess scale_width --dataroot ../data/Cityscapes/leftImg8bit/val --model_suffix B  --results_dir DPI2I_path_to_cityscapes2GTA5/val
    
  3. Train dual path adaptive segmentation module

    3.1. Generate dual path pseudo label.

    cd ../DPL_master
    python DP_SSL.py --save path_to_dual_pseudo_label_stepi --init-weights_S path_to_model_S --init-weights_T path_to_model_T --thresh 0.9 --threshlen 0.3 --data-list-target ./dataset/cityscapes_list/train.txt --set train --data-dir-targetB DPI2I_path_to_cityscapes2GTA5 --alpha 0.5
    

    3.2. Train model_S and model_T with the dual path pseudo labels, respectively.

    python DPL.py --snapshot-dir snapshots/DPL_modelS_step_i --data-dir-target DPI2I_path_to_cityscapes2GTA5 --data-label-folder-target path_to_dual_pseudo_label_stepi --init-weights path_to_model_S --domain S
    
    python DPL.py --snapshot-dir snapshots/DPL_modelT_step_i --data-dir DPI2I_path_to_GTA52cityscapes --data-label-folder-target path_to_dual_pseudo_label_stepi --init-weights path_to_model_T
    

    3.3. Update path_to_model_S with the path to the best model_S checkpoint and path_to_model_T with the path to the best model_T checkpoint, set --threshlen to 0.25, then repeat 3.1-3.2 for 3 more rounds; a loop sketch of these rounds is given below.
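
The rounds in 3.1-3.3 can be scripted. The loop below is only an illustrative sketch: it reuses the exact commands and placeholder paths from above, assumes the working directory is DPL_master (as set in step 3.1), and leaves checkpoint selection as a manual step.

    # run 4 rounds in total: the initial round plus 3 repetitions
    for i in 1 2 3 4; do
        # --threshlen is 0.3 in the first round and 0.25 afterwards (step 3.3)
        if [ "$i" -eq 1 ]; then TL=0.3; else TL=0.25; fi
        # 3.1: generate dual path pseudo labels
        python DP_SSL.py --save path_to_dual_pseudo_label_step$i \
            --init-weights_S path_to_model_S --init-weights_T path_to_model_T \
            --thresh 0.9 --threshlen $TL \
            --data-list-target ./dataset/cityscapes_list/train.txt --set train \
            --data-dir-targetB DPI2I_path_to_cityscapes2GTA5 --alpha 0.5
        # 3.2: train model_S and model_T with the pseudo labels
        python DPL.py --snapshot-dir snapshots/DPL_modelS_step_$i \
            --data-dir-target DPI2I_path_to_cityscapes2GTA5 \
            --data-label-folder-target path_to_dual_pseudo_label_step$i \
            --init-weights path_to_model_S --domain S
        python DPL.py --snapshot-dir snapshots/DPL_modelT_step_$i \
            --data-dir DPI2I_path_to_GTA52cityscapes \
            --data-label-folder-target path_to_dual_pseudo_label_step$i \
            --init-weights path_to_model_T
        # 3.3: before the next round, manually point path_to_model_S / path_to_model_T
        # at the best checkpoints produced in this round
    done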

Single path warm-up

If you want to train DPL from the very beginning, a training example of single path warm-up is also provided below:

Download the pretrained model trained only on the labeled source dataset, Source_only [Google_Drive, BaiduYun(Code:fjdw)].

  1. Train original cycleGAN (without Dual Path Image Translation).

    cd CycleGAN_DPL
    python train.py --dataroot ../data --name ori_cycle --A_setroot GTA5/images --B_setroot Cityscapes/leftImg8bit/train --model cycle_diff --lambda_semantic 0
    
  2. Generate transferred GTA5->Cityscapes images with original cycleGAN.

    python test.py --name ori_cycle --no_dropout --load_size 1024 --crop_size 1024 --preprocess scale_width --dataroot ../data/GTA5/images --model_suffix A  --results_dir path_to_ori_cycle_GTA52cityscapes
    
  3. Before warm-up, pretrain model_T without SSL and store the best checkpoint in path_to_pretrained_T:

    cd ../DPL_master
    python DPL.py --snapshot-dir snapshots/pretrain_T --init-weights path_to_initialization_S --data-dir path_to_ori_cycle_GTA52cityscapes
    
  4. Warm up model_T.

    4.1. Generate labels on source dataset with label correction.

    python SSL_source.py --set train --data-dir path_to_ori_cycle_GTA52cityscapes --init-weights path_to_pretrained_T --threshdelta 0.3 --thresh 0.9 --threshlen 0.65 --save path_to_corrected_label_step1_or_step2 
    

    4.2. Generate pseudo labels on target dataset.

    python SSL.py --set train --data-list-target ./dataset/cityscapes_list/train.txt --init-weights path_to_pretrained_T  --thresh 0.9 --threshlen 0.65 --save path_to_pseudo_label_step1_or_step2 
    

    4.3. Train model_T with label correction.

    python DPL.py --snapshot-dir snapshots/label_corr_step1_or_step2 --data-dir path_to_ori_cycle_GTA52cityscapes --source-ssl True --source-label-dir path_to_corrected_label_step1_or_step2 --data-label-folder-target path_to_pseudo_label_step1_or_step2 --init-weights path_to_pretrained_T          
    

    4.4. Update path_to_pretrained_T with the path to the best model from 4.3, then repeat 4.1-4.3 for one more round.

More Experiments

  • For the SYNTHIA-to-Cityscapes scenario, train DPL with "--source synthia" and change the data paths accordingly.
  • For training on "FCN-8s with VGG16", train DPL with "--model VGG".
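
As a sketch of how these flags combine with the earlier commands, a SYNTHIA variant of the model_T training call from step 3.2 might look like the following; DPI2I_path_to_synthia2cityscapes is a placeholder analogous to DPI2I_path_to_GTA52cityscapes, and everything else is unchanged:

    python DPL.py --snapshot-dir snapshots/DPL_modelT_synthia_step_i \
        --data-dir DPI2I_path_to_synthia2cityscapes \
        --data-label-folder-target path_to_dual_pseudo_label_stepi \
        --init-weights path_to_model_T --source synthia

Appending "--model VGG" in the same way would switch the segmentation backbone to FCN-8s with VGG16.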

Citation

If you find our paper and code useful in your research, please consider giving a star and a citation.

@inproceedings{cheng2021dual,
  title={Dual Path Learning for Domain Adaptation of Semantic Segmentation},
  author={Cheng, Yiting and Wei, Fangyun and Bao, Jianmin and Chen, Dong and Wen, Fang and Zhang, Wenqiang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={9082--9091},
  year={2021}
}

Acknowledgment

This code is heavily borrowed from BDL.
