
Sparse-to-dense Feature Matching: Intra and Inter domain Cross-modal Learning in Domain Adaptation for 3D Semantic Segmentation

This is the code related to "Sparse-to-dense Feature Matching: Intra and Inter domain Cross-modal Learning in Domain Adaptation for 3D Semantic Segmentation" (ICCV 2021).

1. Paper

Sparse-to-dense Feature Matching: Intra and Inter domain Cross-modal Learning in Domain Adaptation for 3D Semantic Segmentation
IEEE International Conference on Computer Vision (ICCV 2021)

If you find this work helpful to your research, please cite it as follows:

@inproceedings{peng2021sparse,
  title={Sparse-to-dense Feature Matching: Intra and Inter domain Cross-modal Learning in Domain Adaptation for 3D Semantic Segmentation},
  author={Peng, Duo and Lei, Yinjie and Li, Wen and Zhang, Pingping and Guo, Yulan},
  booktitle={Proceedings of the International Conference on Computer Vision (ICCV)},
  year={2021},
  publisher={IEEE}
}

2. Preparation

You can follow the steps below to set up the required environment. This code is mainly modified from xMUDA; you can also refer to its README if the installation does not go well.

2.1 Setup a Conda environment:

First, we recommend creating a new Conda environment named nuscenes.

conda create --name nuscenes python=3.7

You can enable the virtual environment using:

conda activate nuscenes 

To deactivate the virtual environment, use:

conda deactivate

2.2 Install nuscenes-devkit:

Download the devkit to your computer, decompress it, and enter the directory.

Add the python-sdk directory to your PYTHONPATH environment variable by adding the following to your ~/.bashrc:

export PYTHONPATH="${PYTHONPATH}:$HOME/nuscenes-devkit/python-sdk"

From the command line (with the "nuscenes" environment activated), install the base requirements:

pip install -r setup/requirements.txt

Set the NUSCENES environment variable:

export NUSCENES="/data/sets/nuscenes"

Finally, install the devkit:

pip install nuscenes-devkit

After the above steps, the devkit is installed. For any questions, you can refer to the devkit installation help.
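
To verify the installation, you can try loading a split from Python. This is a minimal sketch, assuming the v1.0-mini split has been downloaded into the directory pointed to by NUSCENES; it is not part of the DsCML code:

from nuscenes.nuscenes import NuScenes

# Loading a split exercises both the PYTHONPATH setup and the pip install.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)
print(len(nusc.sample))  # number of annotated samples in the split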

If you encounter an error with "pycocotools", you can try the following steps:

(1) Install Cython in your environment:

sudo apt-get install cython
pip install cython

(2) Download the cocoapi to your computer, decompress and enter it.

(3) Using cmd to enter the path under "PythonAPI", type:

make

(4) Type:

pip install pycocotools
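
If the steps above succeed, the import should now work. A quick check from Python (a sketch, not part of the repo):

from pycocotools import mask  # raises ImportError if the build failed
print(mask.__name__)          # prints 'pycocotools.mask'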

2.3 Install SparseConvNet:

Download SparseConvNet to your computer, decompress it, enter the directory, and build it:

cd SparseConvNet/
bash develop.sh
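
To check that the extension compiled correctly, you can construct a small layer from Python. This is only a sanity-check sketch; the layer is arbitrary and not part of DsCML:

import sparseconvnet as scn

# If develop.sh succeeded, the compiled extension imports and a layer can be built.
conv = scn.SubmanifoldConvolution(3, 1, 16, 3, False)  # dimension, nIn, nOut, filter_size, bias
print(conv)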

3. Datasets Preparation

For dataset preprocessing, the code and steps are largely borrowed from xMUDA; you can find more preprocessing details at this link. We summarize the preprocessing as follows:

3.1 NuScenes

Download NuScenes from the NuScenes website and extract it.

Before training, you need to run the preprocessing to generate the data. Please edit the script DsCML/data/nuscenes/preprocess.py as follows and then run it (a sketch of the edit follows the list below).

root_dir should point to the root directory of the NuScenes dataset

out_dir should point to the desired output directory to store the pickle files
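
For example, after editing, the two paths in the script might look like the following. This is a sketch with placeholder paths; the exact variable layout may differ in your copy of preprocess.py:

root_dir = '/data/sets/nuscenes'            # root of the extracted NuScenes dataset
out_dir = '/data/sets/nuscenes_preprocess'  # output directory for the pickle files

Then run the script:

python DsCML/data/nuscenes/preprocess.py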

3.2 A2D2

Download the A2D2 Semantic Segmentation dataset and Sensor Configuration from the Audi website.

Similar to the NuScenes preprocessing, please save all points that project into the front camera image, as well as the segmentation labels, to a pickle file.

Please edit the script DsCML/data/a2d2/preprocess.py as follows and then run it (see the sketch below).

root_dir should point to the root directory of the A2D2 dataset

out_dir should point to the desired output directory to store the undistorted images and pickle files.

It should be set to a different path than root_dir to prevent overwriting the original images.
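
As in Section 3.1, the edit amounts to setting the two paths (a sketch with placeholder paths):

root_dir = '/data/sets/a2d2'            # root of the extracted A2D2 dataset
out_dir = '/data/sets/a2d2_preprocess'  # separate directory for the undistorted images and pickle files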

3.3 SemanticKITTI

Download the files from the SemanticKITTI website and additionally the color data from the Kitti Odometry website. Extract everything into the same folder.

Please edit the script DsCML/data/semantic_kitti/preprocess.py as follows and then run it (see the sketch below).

root_dir should point to the root directory of the SemanticKITTI dataset

out_dir should point to the desired output directory to store the pickle files
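
The edit follows the same pattern as in Sections 3.1 and 3.2 (placeholder paths):

root_dir = '/data/sets/semantic_kitti'            # root of the extracted SemanticKITTI dataset
out_dir = '/data/sets/semantic_kitti_preprocess'  # output directory for the pickle files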

4. Usage

You can train DsCML from the command line or with an IDE such as PyCharm:

python DsCML/train_DsCML.py --cfg=../configs/nuscenes/day_night/xmuda.yaml

The output will be written to /home/<user>/workspace by default. You can change the path by editing OUTPUT_DIR in the config file (e.g. configs/nuscenes/day_night/xmuda.yaml).

You can start training on the other UDA scenarios (USA/Singapore and A2D2/SemanticKITTI):

python DsCML/train_DsCML.py --cfg=../configs/nuscenes/usa_singapore/xmuda.yaml
python DsCML/train_DsCML.py --cfg=../configs/a2d2_semantic_kitti/xmuda.yaml

5. Results

We present several qualitative results reported in our paper.

Update Status

The code of CMAL has been updated. (2021-10-04)
