Overview

CorDA

Code for our paper "Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation".

Prerequisite

Please create and activate the following conda environment:

# It may take several minutes for conda to solve the environment
conda env create -f environment.yml
conda activate corda 

The code was tested on a V100 GPU with 16 GB of memory.

Train a CorDA model

# Train for the SYNTHIA2Cityscapes task
bash run_synthia_stereo.sh
# Train for the GTA2Cityscapes task
bash run_gta.sh

Test the trained model

bash shells/eval_syn2city.sh
bash shells/eval_gta2city.sh

Pre-trained models are provided (Google Drive). Please put them in ./checkpoint.

  • The provided SYNTHIA2Cityscapes model achieves 56.3 mIoU (16 classes) at the end of training.
  • The provided GTA2Cityscapes model achieves 57.7 mIoU (19 classes) at the end of training.

Reported Results on SYNTHIA2Cityscapes

Method   mIoU* (13 classes)   mIoU (16 classes)
CBST     48.9                 42.6
FDA      52.5                 -
DADA     49.8                 42.6
DACS     54.8                 48.3
CorDA    62.8                 55.0

Citation

Please cite our work if you find it useful.

@article{wang2021domain,
  title={Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation},
  author={Wang, Qin and Dai, Dengxin and Hoyer, Lukas and Fink, Olga and Van Gool, Luc},
  journal={arXiv preprint arXiv:2104.13613},
  year={2021}
}

Acknowledgement

  • DACS is used as our codebase and as our domain adaptation baseline (official code)
  • SFSU is used as the source of the stereo Cityscapes depth estimation (official code)

Data links

For questions regarding the code, please contact [email protected].

Comments
  • Training on a custom dataset without ground-truth labels

    From what I understand after reading your paper, you do not need ground-truth label data on the target domain to train with pseudo labels. However, when I look at cityscapes_loader, it seems I need to supply the ground-truth segmentation maps as well.

    I am trying to train the network on a custom dataset (which has only depth maps, with ground-truth segmentation maps only on the source domain), but it looks like I cannot get away without providing them. Do you have any thoughts on this?
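
    For reference, one possible workaround (a sketch, not code from this repo; the ignore index of 250 is an assumption, so verify it against data/cityscapes_loader.py) is to feed the loader an all-ignore dummy label map for unlabeled target images:

    import numpy as np

    def dummy_label(height, width, ignore_index=250):
        # Hypothetical helper: an all-ignore label map keeps the loader's
        # label plumbing intact when no target ground truth exists.
        return np.full((height, width), ignore_index, dtype=np.uint8)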

    opened by chophilip21 6
  • Confusion about the 'depth' of Cityscapes

    Hello, nice work, but I have a question.

    In data/cityscapes_loader.py, lines 181-183:

    depth = cv2.imread(depth_path, flags=cv2.IMREAD_ANYDEPTH).astype(np.float32) / 256. + 1.
    if depth.shape != lbl.shape:
        depth = cv2.resize(depth, lbl.shape[::-1], interpolation=cv2.INTER_NEAREST)
    # Monocular depth: in disparity form 0 - 65535

    (1) Why is the depth calculated as x/256 + 1?
    (2) Is it depth or disparity? The official Cityscapes documentation says disparity = (x - 1)/256.

    Thank you!
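
    For reference, the two decodings side by side (a sketch based on the official Cityscapes disparity documentation; p stands for the raw uint16 pixel array):

    import numpy as np

    def cityscapes_disparity(p):
        # Official Cityscapes decoding: disparity = (p - 1) / 256, valid where p > 0.
        return np.where(p > 0, (p.astype(np.float32) - 1.0) / 256.0, 0.0)

    def loader_value(p):
        # The loader's transform: p / 256 + 1. Presumably a disparity-like value
        # shifted to stay strictly positive, rather than a metric depth.
        return p.astype(np.float32) / 256.0 + 1.0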

    opened by ganyz 6
  • gta2city

    When I tried to reproduce the performance of your GTA2City model, the mIoU only reached about 54.8 after 250,000 iterations. I didn't change anything except using CUDA 10.2. Could you please provide the training log of your GTA2City run? Thanks a lot!

    opened by xiaoachen98 6
  • Question about the pretrained parameters of the backbone

    Thanks for sharing the code; it brings an amazing improvement to this field.

    I notice that you use a backbone pretrained on MS COCO, the same as DACS. Have you tried a backbone pretrained on ImageNet? If so, could you please provide the corresponding results?

    opened by super233 4
  • About the intrinsics used in GTA depth estimation

    Thanks a lot for your fantastic work. When I followed the depth estimation mentioned in issue #7, I went to https://playing-for-benchmarks.org. However, its camera calibration doesn't include the intrinsic matrix directly, which Monodepth2 needs for depth estimation. Would you kindly share the GTA intrinsics you used for depth estimation? Or is there a way to convert GTA's projection matrix to an intrinsic matrix?
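
    For reference, one common conversion (a sketch assuming an OpenGL-style projection matrix with a symmetric frustum and the principal point at the image center; not verified against the Playing-for-Benchmarks calibration format):

    import numpy as np

    def projection_to_intrinsics(P, width, height):
        # Assumes P[0, 0] = 2 * fx / width and P[1, 1] = 2 * fy / height.
        fx = P[0, 0] * width / 2.0
        fy = P[1, 1] * height / 2.0
        cx, cy = width / 2.0, height / 2.0
        return np.array([[fx, 0.0, cx],
                         [0.0, fy, cy],
                         [0.0, 0.0, 1.0]])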

    opened by Ichinose0code 2
  • Why does the class 'train' have 0 mIoU? What could have happened?

    I downloaded your pretrained model and ran the demo, but I find the train IoU is 0.0:

    (yy_corda) [email protected]:/media/ailab/data/yy/corda$ bash shells/eval_gta2city.sh
    ./checkpoint/gta
    Found 500 val images
    Evaluating, found 500 batches.
    100 processed
    200 processed
    300 processed
    400 processed
    500 processed
    class  0 road         IU 94.81
    class  1 sidewalk     IU 62.18
    class  2 building     IU 88.03
    class  3 wall         IU 33.09
    class  4 fence        IU 43.51
    class  5 pole         IU 39.93
    class  6 traffic_light IU 49.46
    class  7 traffic_sign IU 54.68
    class  8 vegetation   IU 88.01
    class  9 terrain      IU 47.67
    class 10 sky          IU 89.22
    class 11 person       IU 68.22
    class 12 rider        IU 39.21
    class 13 car          IU 90.25
    class 14 truck        IU 51.43
    class 15 bus          IU 58.37
    class 16 train        IU 0.00
    class 17 motorcycle   IU 40.38
    class 18 bicycle      IU 57.42
    meanIOU: 0.5767768805758403
    

    I trained my own model and tested it with eval_syn2city.sh; there, 3 classes have 0.0 IoU because those classes are missing in the source domain. But when I download the pretrained model and run eval_gta2city.sh, the train class is still missing. So I want to know why. Could it be that the train class never appears in the Cityscapes data, so its IoU is 0?

    opened by yuheyuan 2
  • How to obtain your depth datasets?

    Hi, thanks for your great work!

    It would be great if you could elaborate more on how you obtained the monocular depth estimation.

    I understand that you've uploaded the dataset, but it would be really helpful to know exactly how you produced it.

    From your paper, in the ablation study part: "We would like to highlight that for both stereo and monocular depth estimations, only stereo pairs or image sequences from the same dataset are used to train and generate the pseudo depth estimation model. As no data from external datasets is used, and stereo pairs and image sequences are relatively easy to obtain, our proposal of using self-supervised depth have the potential to be effectively realized in real-world applications."

    So I imagine you obtain your monocular depth pseudo ground truth by:

    1. Downloading target-domain videos (here Cityscapes; by the way, where did you get the Cityscapes videos?)
    2. Training a Monodepth2 model on those videos (for how long?)
    3. Using the model to generate the pseudo ground truth, then repeating the process for the source domain (GTA 5 or SYNTHIA)

    Am I getting this right? And are there any other important points you want to highlight about computing such depth labels?
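
    For reference, step 3 roughly matches Monodepth2's published inference pipeline (https://github.com/nianticlabs/monodepth2). A minimal sketch following its test_simple.py, with placeholder paths:

    # Minimal Monodepth2 inference sketch (follows monodepth2's test_simple.py);
    # encoder.pth / depth.pth / frame.png are placeholder paths, not repo files.
    import torch
    from torchvision import transforms
    from PIL import Image
    import networks  # from the monodepth2 repository

    encoder = networks.ResnetEncoder(18, False)
    loaded = torch.load("encoder.pth", map_location="cpu")
    encoder.load_state_dict({k: v for k, v in loaded.items() if k in encoder.state_dict()})
    decoder = networks.DepthDecoder(num_ch_enc=encoder.num_ch_enc, scales=range(4))
    decoder.load_state_dict(torch.load("depth.pth", map_location="cpu"))
    encoder.eval(); decoder.eval()

    # Feed one frame at the resolution the encoder was trained with.
    img = Image.open("frame.png").convert("RGB").resize((loaded["width"], loaded["height"]))
    x = transforms.ToTensor()(img).unsqueeze(0)
    with torch.no_grad():
        disp = decoder(encoder(x))[("disp", 0)]  # relative (scale-ambiguous) disparity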

    Regards, Tu

    opened by tudragon154203 2
  • How to continue training?

    When I use a script like:

    CUDA_VISIBLE_DEVICES=0 python3 -u trainUDA_gta.py --config ./configs/configUDA_gta2city.json --name UDA-gta --resume /saved/DeepLabv2-depth-gtamono-cityscapestereo/05-03_02-13-UDA-gta/checkpoint-iter95000.pth | tee ./gta-corda.log

    It starts training again from scratch, although the new checkpoints are saved.

    opened by ygjwd12345 2
  • Warning: optimizer contains a parameter group with duplicate parameters

    I followed your code and trained a model, but the results may not meet expectations.

    I evaluated the model you shared:

    bash shells/eval_syn2city.sh
    

    Your shared model on syn2city (19 classes): meanIoU 0.4771. The model I trained on syn2city (19 classes): meanIoU only 0.467.

    During training I noticed the warning below, so I want to know whether it may cause the drop in results.

    /home/ailab/anaconda3/envs/yy_CORDA/lib/python3.7/site-packages/torch/optim/sgd.py:68: UserWarning: optimizer contains a parameter group with duplicate parameters; in future, this will cause an error; see github.com/pytorch/pytorch/issues/40967 for more information
      super(SGD, self).__init__(params, defaults)
    D_init tensor(134.8489, device='cuda:0', grad_fn=<DivBackward0>) D tensor(134.5171, device='cuda:0', grad_fn=<DivBackward0>)
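
    For what it's worth, this warning usually means the same tensor appears more than once across the optimizer's parameter groups. A generic deduplication sketch (not code from this repo):

    # Drop repeated tensors before handing the parameter list to the optimizer;
    # parameters are compared by object identity.
    def unique_params(param_iterables):
        seen = set()
        for params in param_iterables:
            for p in params:
                if id(p) not in seen:
                    seen.add(id(p))
                    yield p

    # Usage (hypothetical model attributes):
    # optimizer = torch.optim.SGD(
    #     list(unique_params([model.backbone.parameters(), model.classifier.parameters()])),
    #     lr=2.5e-4)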
    
    opened by yuheyuan 1
  • deeplabv2_synthia.py may have extra indentation

    In the forward code, return out has an extra level of indentation:

       def forward(self, x):
            out = self.conv2d_list[0](x)
            for i in range(len(self.conv2d_list)-1):
                out += self.conv2d_list[i+1](x)
                return out
    

    This is the code in your repository:

    class Classifier_Module(nn.Module):
    
        def __init__(self, dilation_series, padding_series, num_classes):
            super(Classifier_Module, self).__init__()
            self.conv2d_list = nn.ModuleList()
            for dilation, padding in zip(dilation_series, padding_series):
                self.conv2d_list.append(nn.Conv2d(256, num_classes, kernel_size=3, stride=1, padding=padding, dilation=dilation, bias = True))
    
            for m in self.conv2d_list:
                m.weight.data.normal_(0, 0.01)
    
        def forward(self, x):
            out = self.conv2d_list[0](x)
            for i in range(len(self.conv2d_list)-1):
                out += self.conv2d_list[i+1](x)
                return out
    
    

    I think the forward should be as follows, because conv2d_list contains four modules; with return out indented inside the loop, only the first two branches are summed instead of all four:

       def forward(self, x):
            out = self.conv2d_list[0](x)
            for i in range(len(self.conv2d_list)-1):
                out += self.conv2d_list[i+1](x)
            return out
    
   self._make_pred_layer(Classifier_Module, [6, 12, 18, 24], [6, 12, 18, 24], NUM_OUTPUT[task])
    
       def _make_pred_layer(self,block, dilation_series, padding_series,num_classes):
            return block(dilation_series,padding_series,num_classes)
    
    opened by yuheyuan 1
  • Checkpoint links fail

    I can't download the checkpoint files from your links. When I click through to Google Drive, the file size is shown as 2 GB, but the downloaded file is only 0 B.

    opened by xiaoachen98 0
Owner
Qin Wang
PhD student @ ETH Zürich