Code for our paper Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation

Overview

CorDA

Code for our paper Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation.

Prerequisite

Please create and activate the following conda environment:

# It may take several minutes for conda to solve the environment
conda env create -f environment.yml
conda activate corda 

The code was tested on a V100 GPU with 16 GB of memory.

Train a CorDA model

# Train for the SYNTHIA2Cityscapes task
bash run_synthia_stereo.sh
# Train for the GTA2Cityscapes task
bash run_gta.sh

Test the trained model

bash shells/eval_syn2city.sh
bash shells/eval_gta2city.sh

Pre-trained models are provided (Google Drive). Please put them in ./checkpoint.

  • The provided SYNTHIA2Cityscapes model achieves 56.3 mIoU (16 classes) at the end of the training.
  • The provided GTA2Cityscapes model achieves 57.7 mIoU (19 classes) at the end of the training.

Reported Results on SYNTHIA2Cityscapes

Method   mIoU* (13 classes)   mIoU (16 classes)
CBST     48.9                 42.6
FDA      52.5                 -
DADA     49.8                 42.6
DACS     54.8                 48.3
CorDA    62.8                 55.0

Citation

Please cite our work if you find it useful.

@article{wang2021domain,
  title={Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation},
  author={Wang, Qin and Dai, Dengxin and Hoyer, Lukas and Fink, Olga and Van Gool, Luc},
  journal={arXiv preprint arXiv:2104.13613},
  year={2021}
}

Acknowledgement

  • DACS is used as our codebase and our DA baseline (official repository)
  • SFSU is the source of the stereo Cityscapes depth estimation (official repository)

Data links

For questions regarding the code, please contact [email protected].

Comments
  • Training on a custom dataset without ground truth label

    From what I understand after reading your paper, you do not need ground-truth label data on the target domain to train with pseudo labels. However, when I look at cityscapes_loader, it seems I need to supply the ground-truth segmentation maps as well.

    I am trying to train the network on a custom dataset (which has only depth maps, with ground-truth segmentation maps only on the source domain), but it looks like I cannot get away without providing them. Do you have any thoughts on this?
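
    One possible workaround, as a minimal sketch (not part of the released code; IGNORE_INDEX and load_label are hypothetical names): have the loader return an all-ignore label map when no target-domain ground truth exists, so the segmentation loss skips those pixels.

        import cv2
        import numpy as np

        IGNORE_INDEX = 250  # assumed ignore value; must match the loss's ignore_index

        def load_label(lbl_path, img_shape):
            """Return the GT segmentation map, or an all-ignore map if none exists."""
            if lbl_path is None:
                # No target-domain ground truth: every pixel is ignored by the loss.
                return np.full(img_shape[:2], IGNORE_INDEX, dtype=np.uint8)
            return cv2.imread(lbl_path, cv2.IMREAD_GRAYSCALE)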

    opened by chophilip21 6
  • Confusion about the 'depth' of cityscapes

    Hello, nice work, but I have a question.

    In data/cityscapes_loader.py, lines 181-183:

        depth = cv2.imread(depth_path, flags=cv2.IMREAD_ANYDEPTH).astype(np.float32) / 256. + 1.
        if depth.shape != lbl.shape:
            depth = cv2.resize(depth, lbl.shape[::-1], interpolation=cv2.INTER_NEAREST)
        # Monocular depth: in disparity form 0 - 65535

    (1) Why is the depth calculated as x/256 + 1?
    (2) Is it depth or disparity? The official Cityscapes documentation says disparity = (x - 1) / 256.

    Thank you!
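
    For reference, a small sketch contrasting the official Cityscapes disparity decoding with the transform in the loader (reading the +1 offset as keeping values strictly positive is an assumption, not confirmed by the authors):

        import cv2
        import numpy as np

        raw = cv2.imread("disparity.png", cv2.IMREAD_ANYDEPTH).astype(np.float32)

        # Official Cityscapes encoding: valid pixels store disparity = (raw - 1) / 256,
        # and raw == 0 marks invalid measurements.
        disparity = np.where(raw > 0, (raw - 1.0) / 256.0, 0.0)

        # Transform in this repo's loader: raw / 256 + 1, a shifted disparity that
        # stays >= 1 everywhere (presumably avoiding division by zero downstream).
        shifted = raw / 256.0 + 1.0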

    opened by ganyz 6
  • gta2city

    When I revisited the performance of your GTA2City model, I found that the mIoU only reaches about 54.8 after 250,000 iterations. I didn't change anything except using CUDA 10.2. Could you please provide your GTA2City training log? Thanks a lot!

    opened by xiaoachen98 6
  • Question about the pretrained parameters of backbone

    Thanks for sharing the code; it brings an amazing improvement to this field.

    I notice that you used a backbone pretrained on MS COCO, the same as DACS. Have you tried a backbone pretrained on ImageNet? If yes, could you please provide the corresponding results?

    opened by super233 4
  • About intrinsics used in GTA depth estimation

    Thanks a lot for your fantastic work. When I followed the depth estimation mentioned in issue #7, I went to https://playing-for-benchmarks.org. However, its camera calibration doesn't include the intrinsic matrix directly, which Monodepth2 needs for depth estimation. Would you kindly share the GTA intrinsics you used? Or is there a way to convert GTA's projection matrix to an intrinsic matrix?
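
    For what it's worth, a minimal conversion sketch assuming an OpenGL-style perspective projection matrix (sign and axis conventions differ between engines, so the cx/cy terms may need sign flips; verify against a known field of view):

        import numpy as np

        def projection_to_intrinsics(P, width, height):
            """Pinhole intrinsics K from an OpenGL-style projection matrix P.
            Assumes P[0,0] = 2*fx/W and P[1,1] = 2*fy/H; conventions vary."""
            fx = P[0, 0] * width / 2.0
            fy = P[1, 1] * height / 2.0
            cx = (1.0 - P[0, 2]) * width / 2.0
            cy = (1.0 + P[1, 2]) * height / 2.0
            return np.array([[fx, 0.0, cx],
                             [0.0, fy, cy],
                             [0.0, 0.0, 1.0]], dtype=np.float32)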

    opened by Ichinose0code 2
  • Why does the 'train' class have 0 mIoU? What could cause this?

    I downloaded your pretrained model and ran the demo, but I find the train IoU is 0.0:

    (yy_corda) [email protected]:/media/ailab/data/yy/corda$ bash shells/eval_gta2city.sh
    ./checkpoint/gta
    Found 500 val images
    Evaluating, found 500 batches.
    100 processed
    200 processed
    300 processed
    400 processed
    500 processed
    class  0 road         IU 94.81
    class  1 sidewalk     IU 62.18
    class  2 building     IU 88.03
    class  3 wall         IU 33.09
    class  4 fence        IU 43.51
    class  5 pole         IU 39.93
    class  6 traffic_light IU 49.46
    class  7 traffic_sign IU 54.68
    class  8 vegetation   IU 88.01
    class  9 terrain      IU 47.67
    class 10 sky          IU 89.22
    class 11 person       IU 68.22
    class 12 rider        IU 39.21
    class 13 car          IU 90.25
    class 14 truck        IU 51.43
    class 15 bus          IU 58.37
    class 16 train        IU 0.00
    class 17 motorcycle   IU 40.38
    class 18 bicycle      IU 57.42
    meanIOU: 0.5767768805758403
    

    I trained my own model and tested with eval_syn2city.sh; there, 3 classes have 0.0 IoU because they are missing in the source domain. But when I download your pretrained model and run eval_gta2city.sh, one class, train, is still at 0. I want to know why. Could it be that the train class never appears in the predictions on the Cityscapes validation set, so its IoU is 0?
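
    For context, a minimal sketch of per-class IoU arithmetic (generic, not necessarily the repository's exact evaluator): a class the model never predicts has TP = 0, so its IoU is 0 whenever any ground-truth or false-positive pixels for that class exist.

        import numpy as np

        def per_class_iou(pred, gt, num_classes):
            """Per-class IoU = TP / (TP + FP + FN) over flat label arrays."""
            ious = []
            for c in range(num_classes):
                tp = np.sum((pred == c) & (gt == c))
                fp = np.sum((pred == c) & (gt != c))
                fn = np.sum((pred != c) & (gt == c))
                denom = tp + fp + fn
                ious.append(tp / denom if denom > 0 else float("nan"))
            return ious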

    opened by yuheyuan 2
  • How to obtain your depth datasets?

    Hi, thanks for your great work!

    It would be great if you could elaborate on how you obtained the monocular depth estimation.

    I understand that you've uploaded the dataset, but it would be really helpful to know exactly how you did it.

    From your paper, in the ablation study part: "We would like to highlight that for both stereo and monocular depth estimations, only stereo pairs or image sequences from the same dataset are used to train and generate the pseudo depth estimation model. As no data from external datasets is used, and stereo pairs and image sequences are relatively easy to obtain, our proposal of using self-supervised depth have the potential to be effectively realized in real-world applications."

    So I imagine you get your monocular depth pseudo ground truth by:

    1. Downloading target-domain videos (here Cityscapes; by the way, where do you get the Cityscapes videos?)
    2. Training a Monodepth2 model on those videos (for how long?)
    3. Using the model to generate the pseudo ground truth, then repeating for the source domain (GTA 5 or SYNTHIA); see the sketch below.

    Am I getting it right? And are there any other important points you want to highlight when computing such depth labels?
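
    A hedged sketch of step 3 under these assumptions (illustrative only, not the authors' exact pipeline; the scale/offset must invert whatever the loader decodes, e.g. the raw/256 + 1 read discussed in another issue):

        import cv2
        import numpy as np
        import torch

        @torch.no_grad()
        def save_pseudo_depth(model, image, out_path):
            """Predict disparity for one image tensor and store it as 16-bit PNG.
            Assumes a model returning a [B, 1, H, W] disparity map; the 256 scale
            is a placeholder that must match the loader's decoding step."""
            disp = model(image.unsqueeze(0))[0, 0].cpu().numpy()  # HxW disparity
            raw = np.clip(disp * 256.0, 0, 65535).astype(np.uint16)
            cv2.imwrite(out_path, raw)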

    Regards, Tu

    opened by tudragon154203 2
  • How to continue train?

    When I use a script like:

    CUDA_VISIBLE_DEVICES=0 python3 -u trainUDA_gta.py --config ./configs/configUDA_gta2city.json --name UDA-gta --resume /saved/DeepLabv2-depth-gtamono-cityscapestereo/05-03_02-13-UDA-gta/checkpoint-iter95000.pth | tee ./gta-corda.log

    It runs again, but seems to start over rather than continuing from the saved iteration; only new checkpoints are saved.
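
    For reference, a minimal resume pattern in PyTorch (hypothetical key names; trainUDA_gta.py may structure its checkpoints differently):

        import torch

        def resume(model, optimizer, ckpt_path):
            """Restore model/optimizer state; return the iteration to continue from."""
            ckpt = torch.load(ckpt_path, map_location="cpu")
            model.load_state_dict(ckpt["model"])
            optimizer.load_state_dict(ckpt["optimizer"])
            # The training loop should start counting from this value, not from 0.
            return ckpt.get("iteration", 0)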

    opened by ygjwd12345 2
  • Warning:optimizer contains a parameter group with duplicate parameters

    I followed your code and trained a model, but the results do not meet expectations.

    I evaluated the model you shared:

    bash shells/eval_syn2city.sh
    

    Your shared model (syn2city, 19 classes): meanIoU 0.4771. The model I trained (syn2city, 19 classes): meanIoU only 0.467.

    During training I see the warning below, so I want to know whether it may cause the result drop.

    /home/ailab/anaconda3/envs/yy_CORDA/lib/python3.7/site-packages/torch/optim/sgd.py:68: UserWarning: optimizer contains a parameter group with duplicate parameters; in future, this will cause an error; see github.com/pytorch/pytorch/issues/40967 for more information
      super(SGD, self).__init__(params, defaults)
    D_init tensor(134.8489, device='cuda:0', grad_fn=<DivBackward0>) D tensor(134.5171, device='cuda:0', grad_fn=<DivBackward0>)
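
    A hedged sketch of one common fix (assuming the warning comes from the same tensor appearing in more than one parameter group; whether this explains the mIoU gap is not confirmed):

        def dedup_param_groups(param_groups):
            """Drop repeated tensors across optimizer parameter groups, keeping
            the first occurrence, so torch.optim stops warning about duplicates."""
            seen = set()
            cleaned = []
            for group in param_groups:
                params = [p for p in group["params"] if id(p) not in seen]
                seen.update(id(p) for p in params)
                cleaned.append({**group, "params": params})
            return cleaned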
    
    opened by yuheyuan 1
  • deeplabv2_synthia.py may have an extra indentation in forward

    In the forward method, return out appears to be indented one level too deep:

       def forward(self, x):
            out = self.conv2d_list[0](x)
            for i in range(len(self.conv2d_list)-1):
                out += self.conv2d_list[i+1](x)
                return out
    

    This is the code in your repository:

    class Classifier_Module(nn.Module):
    
        def __init__(self, dilation_series, padding_series, num_classes):
            super(Classifier_Module, self).__init__()
            self.conv2d_list = nn.ModuleList()
            for dilation, padding in zip(dilation_series, padding_series):
                self.conv2d_list.append(nn.Conv2d(256, num_classes, kernel_size=3, stride=1, padding=padding, dilation=dilation, bias = True))
    
            for m in self.conv2d_list:
                m.weight.data.normal_(0, 0.01)
    
        def forward(self, x):
            out = self.conv2d_list[0](x)
            for i in range(len(self.conv2d_list)-1):
                out += self.conv2d_list[i+1](x)
                return out
    
    

    I think the following forward is what was intended, because the module builds a list of four parallel branches; with return out indented inside the loop, only the first two of the four branch outputs are summed before returning:

       def forward(self, x):
            out = self.conv2d_list[0](x)
            for i in range(len(self.conv2d_list)-1):
                out += self.conv2d_list[i+1](x)
            return out
    
   self._make_pred_layer(Classifier_Module, [6, 12, 18, 24], [6, 12, 18, 24], NUM_OUTPUT[task])
    
       def _make_pred_layer(self,block, dilation_series, padding_series,num_classes):
            return block(dilation_series,padding_series,num_classes)
    
    opened by yuheyuan 1
  • checkpoints links fail

    I can't download the checkpoint files from your links. When I click through to Google Drive, the file size is shown as 2 GB, but the downloaded file is only 0 B.

    opened by xiaoachen98 0
Owner
Qin Wang
PhD student @ ETH Zürich