Code for "Domain Adaptive Video Segmentation via Temporal Consistency Regularization" (ICCV 2021)

Overview

Domain Adaptive Video Segmentation via Temporal Consistency Regularization

Updates

Paper

Domain Adaptive Video Segmentation via Temporal Consistency Regularization

Dayan Guan, Jiaxing Huang, Aoran Xiao, Shijian Lu
School of Computer Science and Engineering, Nanyang Technological University, Singapore
International Conference on Computer Vision, 2021.

If you find this code useful for your research, please cite our paper:

@inproceedings{guan2021domain,
  title={Domain adaptive video segmentation via temporal consistency regularization},
  author={Guan, Dayan and Huang, Jiaxing and Xiao, Aoran and Lu, Shijian},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={8053--8064},
  year={2021}
}

Abstract

Video semantic segmentation is an essential task for the analysis and understanding of videos. Recent efforts largely focus on supervised video segmentation by learning from fully annotated data, but the learnt models often experience clear performance drop while applied to videos of a different domain. This paper presents DA-VSN, a domain adaptive video segmentation network that addresses domain gaps in videos by temporal consistency regularization (TCR) for consecutive frames of target-domain videos. DA-VSN consists of two novel and complementary designs. The first is cross-domain TCR that guides the prediction of target frames to have similar temporal consistency as that of source frames (learnt from annotated source data) via adversarial learning. The second is intra-domain TCR that guides unconfident predictions of target frames to have similar temporal consistency as confident predictions of target frames. Extensive experiments demonstrate the superiority of our proposed domain adaptive video segmentation network which outperforms multiple baselines consistently by large margins.

Installation

  1. Conda environment:
conda create -n DA-VSN python=3.6
conda activate DA-VSN
conda install -c menpo opencv
pip install torch==1.2.0 torchvision==0.4.0
  2. Clone ADVENT:
git clone https://github.com/valeoai/ADVENT.git
pip install -e ./ADVENT
  3. Clone this repo:
git clone https://github.com/Dayan-Guan/DA-VSN.git
pip install -e ./DA-VSN

Preparation

  1. Dataset:
DA-VSN/data/Cityscapes/                       % Cityscapes dataset root
DA-VSN/data/Cityscapes/leftImg8bit_sequence   % leftImg8bit_sequence_trainvaltest
DA-VSN/data/Cityscapes/gtFine                 % gtFine_trainvaltest
DA-VSN/data/Viper/                            % VIPER dataset root
DA-VSN/data/Viper/train/img                   % Modality: Images; Frames: *[0-9]; Sequences: 00-77; Format: jpg
DA-VSN/data/Viper/train/cls                   % Modality: Semantic class labels; Frames: *0; Sequences: 00-77; Format: png
DA-VSN/data/SynthiaSeq/                       % SYNTHIA-Seq dataset root
DA-VSN/data/SynthiaSeq/SEQS-04-DAWN           % SYNTHIA-SEQS-04-DAWN
  2. Pre-trained models: Download the pre-trained models and put them in DA-VSN/pretrained_models

Optical Flow Estimation

  • For quick preparation: Download the optical flow estimated from the Cityscapes-Seq validation set here and unzip it in DA-VSN/data:
DA-VSN/data/Cityscapes_val_optical_flow_scale512/  % unzip Cityscapes_val_optical_flow_scale512.zip
  1. Clone flownet2-pytorch:
git clone https://github.com/NVIDIA/flownet2-pytorch.git
  2. Download the pre-trained FlowNet2 and put it in flownet2-pytorch/pretrained_models
  3. Use flownet2-pytorch to estimate optical flow, as sketched below

Evaluation on Pretrained Models

  • VIPER → Cityscapes-Seq:
cd DA-VSN/davsn/scripts
python test.py --cfg configs/davsn_viper2city_pretrained.yml
  • SYNTHIA-Seq → Cityscapes-Seq:
python test.py --cfg configs/davsn_syn2city_pretrained.yml

Training and Testing

  • VIPER → Cityscapes-Seq:
cd DA-VSN/davsn/scripts
python train.py --cfg configs/davsn_viper2city.yml
python test.py --cfg configs/davsn_viper2city.yml
  • SYNTHIA-Seq → Cityscapes-Seq:
python train.py --cfg configs/davsn_syn2city.yml
python test.py --cfg configs/davsn_syn2city.yml

Acknowledgements

This codebase borrows heavily from ADVENT and flownet2-pytorch.

Contact

If you have any questions, please contact: [email protected]

Comments
  • Optical flow is not used for propagating

    Hi, author. I have two questions. The first is that I find you did not use the flow to propagate the previous frame to the current frame; you only use it as a constraint so that pixels appearing in both the current frame (cf) and the key frame (kf) are retained, which seems unreasonable. I refined the code using Resample2d to warp kf to cf, but the result only improved a little.

    The second question: I trained DA-VSN three times on a 1080Ti and a 2080Ti following the settings you gave, but I only get 46 mIoU, which is 2 points lower than yours.

    opened by EDENpraseHAZARD 5
  • Question on Synthia-seq dataset

    Dear authors,

    Thank you for your great work. I have several questions about the SYNTHIA-Seq → Cityscapes-Seq adaptation. The first is about the scale of the training data: it seems that, compared with the VIPER dataset, SYNTHIA-Seq contains only one labeled video with 850 frames in total. Is that true? The second question is that 11 classes are reported in Table 4, but the SYNTHIA-Seq dataloader uses 12 classes, so I am not sure whether the fence class is considered during adaptation: https://github.com/Dayan-Guan/DA-VSN/blob/d110ff70dacec4156a3787eb49e7f2448dfb91a5/davsn/dataset/SynthiaSeq.py#L11

    Thanks in advance for your help!

    opened by xyIsHere 3
  • Details of SYNTHIA-Seq dataset

    Hi author, I have downloaded SYNTHIA-Seq, but I found that there are 'Stereo_Left' and 'Stereo_Right' folders, each containing 'Omni_B', 'Omni_F', 'Omni_L' and 'Omni_R'. I wonder which one is used for training.

    opened by EDENpraseHAZARD 2
  • Could you please provide 'estimated_optical_flow' for training DA-VSN

    Hi @Dayan-Guan , thank you for open-sourcing your work!

    I am trying to follow this work. Training DA-VSN from scratch needs the optical flows (for the 3 datasets used in your paper) estimated by FlowNet2, but the instructions in your README only cover the evaluation part. I also see from recent issues that you have provided code and more instructions for the training part, but the code does not seem complete, so I cannot generate the optical flows with it.

    Could you please provide your generated optical flows for all 3 datasets used in your paper? It would save us time. Alternatively, could you please have another look at the provided 'Code_for_optical_flow_estimation' so that it is runnable for generating optical flows on our own?

    Thanks in advance!

    Regards

    opened by ldkong1205 1
  • In train_video_UDA.py, line 251, trg_prob_warp = warp_bilinear(trg_prob, trg_flow_warp): the image flips, but the optical flow does not flip

    Hello! I really enjoyed reading your work! However, I ran into a problem when running train_video_UDA.py.

    In line 251, trg_prob_warp = warp_bilinear(trg_prob, trg_flow_warp), the variable trg_prob is the prediction for trg_img_b_wk, and trg_img_b_wk is obtained from trg_img_b by flipping it with a certain probability, but trg_flow_warp does not seem to be flipped. Consider this situation: if trg_img_b_wk is flipped while trg_flow_warp is not, then trg_prob_warp and trg_img_d_st do not seem semantically consistent, because the image is flipped but the optical flow is not (even though trg_pl in lines 256-258 is flipped).


    opened by zhe-juanz 0
  • Some questions about data loading

    Hi, this is very enlightening work! @xing0047 @Dayan-Guan, I want to ask a question.

    When I used ./TPS/tps/scripts/train.py to read SynthiaSeq or ViperSeq data, I debugged the code and found the following:

    I tried to print some variables in __getitem__(),

    with the shuffle of source_loader = data.DataLoader() set to False and batch_size=cfg.TRAIN.BATCH_SIZE_SOURCE set to 1:

    1. Although batch_size=1, four pictures and their corresponding key frames are loaded at once, instead of one picture and its previous frame.

    2. The four loaded pictures are out of order, e.g. 2-1-3-4 rather than 1-2-3-4, which seems to violate the shuffle=False setting.

    Could you please clarify this? Thank you very much!

    The print code is shown in a screenshot (not reproduced here).

    The print results are as follows; the order differs on each run:

    ---index--- 1
    ---index--- 0
    ---index--- 2
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000002.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000002.png
    ---index--- 3
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000001.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000001.png
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000003.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000003.png
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000004.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000004.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000003.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000002.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000001.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000000.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000003.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000002.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000001.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000000.png

    opened by zhe-juanz 0
  • Regarding Synthia-Seq Dataset

    I really enjoyed reading your work. I have a question regarding the SYNTHIA-Seq dataset: in the paper you mention using 8,000 synthesized video frames, but the SYNTHIA-Seq DAWN sequence on GitHub contains only 850 images. Can you please clarify this ambiguity? Thank you.

    opened by Ihsan149 0
  • Optical flow for training

    Thanks for your great work! I want to train DA-VSN, but I don't know how to get Estimated_optical_flow_Viper_train or Estimated_optical_flow_Cityscapes-Seq_train. I could not find details about the optical flow in the README or the paper.

    opened by EDENpraseHAZARD 11