Semantic Segmentation for Real Point Cloud Scenes via Bilateral Augmentation and Adaptive Fusion (CVPR 2021)

This repository is for BAAF-Net introduced in the following paper:

"Semantic Segmentation for Real Point Cloud Scenes via Bilateral Augmentation and Adaptive Fusion"
Shi Qiu, Saeed Anwar, Nick Barnes
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)

Paper and Citation

The paper can be downloaded from here (CVF) or here (arXiv).
If you find our paper/codes/results useful, please cite:

@inproceedings{qiu2021semantic,
  title={Semantic Segmentation for Real Point Cloud Scenes via Bilateral Augmentation and Adaptive Fusion},
  author={Qiu, Shi and Anwar, Saeed and Barnes, Nick},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={1757-1767},
  year={2021}
}

Updates

  • 04/05/2021 Results for the S3DIS dataset (mIoU: 72.2%, OA: 88.9%, mAcc: 83.1%) are available now.
  • 04/05/2021 Test results (sequences 11-21: mIoU: 59.9%, OA: 89.8%) for the SemanticKITTI dataset are available now.
  • 04/05/2021 Validation results (sequence 08: mIoU: 58.7%, OA: 91.3%) for SemanticKITTI are available now.
  • 28/05/2021 Pretrained models for all 6 areas of the S3DIS dataset can be downloaded from google drive.
  • 28/05/2021 codes released!

Settings

  • The project is tested with Python 3.6, TensorFlow 1.13.1, and CUDA 10.0 (a quick environment check is sketched after this list).
  • Then install the dependencies: pip install -r helper_requirements.txt
  • And compile the CUDA-based operators: sh compile_op.sh
    (Note: you may need to change the CUDA root directory CUDA_ROOT in ./util/sampling/compile_ops.sh)
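
A minimal environment sanity check, assuming the Python 3.6 / TensorFlow 1.13.1 / CUDA 10.0 setup above (only standard TF 1.x calls are used):

    # Quick check of the setup described above (TensorFlow 1.13.1, CUDA 10.0).
    import tensorflow as tf

    print("TensorFlow version:", tf.__version__)              # expected: 1.13.1
    print("GPU visible to TF:", tf.test.is_gpu_available())   # True if CUDA 10.0 is set up correctly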

Dataset

  • Download S3DIS dataset from here.
  • Unzip and move the folder Stanford3dDataset_v1.2_Aligned_Version to ./data.
  • Run: python utils/data_prepare_s3dis.py
    (Note: you may specify a different directory as dataset_path in ./util/data_prepare_s3dis.py. A quick check of the expected folder layout is sketched below.)
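
Before running the preparation script, a small sketch to confirm the dataset is where the steps above expect it; it only assumes the standard Area_1 ... Area_6 layout of Stanford3dDataset_v1.2_Aligned_Version:

    # Verify the S3DIS folder layout before running utils/data_prepare_s3dis.py.
    from pathlib import Path

    dataset_path = Path("data/Stanford3dDataset_v1.2_Aligned_Version")
    areas = sorted(p.name for p in dataset_path.glob("Area_*"))
    assert len(areas) == 6, f"Expected 6 areas under {dataset_path}, found: {areas}"
    print("Found areas:", areas)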

Training/Test

  • Training:
python -B main_S3DIS.py --gpu 0 --mode train --test_area 5

(Note: specify --test_area as an index from 1 to 6)

  • Test:
python -B main_S3DIS.py --gpu 0 --mode test --test_area 5 --model_path 'pretrained/Area5/snap-32251'

(Note: specify the --test_area index and the trained model path --model_path. A sketch that loops the test over all six areas follows below.)
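
For convenience, a hedged sketch that loops the test command over all six areas via subprocess; the per-area checkpoint paths are assumptions (snap-32251 is only the Area 5 example above, so adjust the snapshot names to the checkpoints you actually have):

    # Run the test command above for every area in turn.
    import subprocess

    for area in range(1, 7):
        subprocess.run(
            ["python", "-B", "main_S3DIS.py",
             "--gpu", "0",
             "--mode", "test",
             "--test_area", str(area),
             "--model_path", f"pretrained/Area{area}/snap-32251"],  # adjust per checkpoint
            check=True,
        )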

6-fold Cross Validation

  • Conduct training and testing on each area.
  • Collect all test results, Area_1_conferenceRoom_1.ply ... Area_6_pantry_1.ply (272 .ply files in total), into the folder ./data/results (a helper sketch follows this list).
  • Run: python utils/6_fold_cv.py
    (Note: you may need to change the target folder original_data_dir and the test-results folder base_dir in ./util/6_fold_cv.py)
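
A small helper sketch for gathering the per-area test outputs into ./data/results before running utils/6_fold_cv.py; the search root "test" is an assumption, so point it at wherever your test .ply files were actually written:

    # Copy all per-area test .ply files into ./data/results for 6-fold evaluation.
    import shutil
    from pathlib import Path

    results_dir = Path("data/results")
    results_dir.mkdir(parents=True, exist_ok=True)

    ply_files = sorted(Path("test").rglob("Area_*.ply"))  # adjust the search root as needed
    for f in ply_files:
        shutil.copy(f, results_dir / f.name)

    print(f"Copied {len(ply_files)} files (272 expected for all 6 areas).")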

Pretrained Models and Results on S3DIS Dataset

  • BAAF-Net pretrained models on all 6 areas can be downloaded from google drive.
  • Download our results (ply files) via google drive for visualizations/comparisons.
  • More functions for loading/writing .ply files can be found here; a quick way to inspect a downloaded result file is sketched below.
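
One way to inspect a downloaded result file, using the third-party plyfile package (pip install plyfile) rather than the repository's own ply helpers; the per-point property names depend on how each file was written, so the sketch simply lists whatever is present:

    # Inspect one of the released result files with the `plyfile` package.
    from plyfile import PlyData

    ply = PlyData.read("data/results/Area_1_conferenceRoom_1.ply")
    vertex = ply["vertex"]
    print("Point count:", vertex.count)
    print("Per-point properties:", [p.name for p in vertex.properties])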

Results on SemanticKITTI Dataset

  • Online test results (sequences 11-21): mIoU: 59.9%, OA: 89.8%
  • Download our test results (sequences 11-21 label files) via google drive for visualizations/comparisons.

  • Validation results (sequence 08): mIoU: 58.7%, OA: 91.3%
  • Download our validation results (sequence 08 label files) via google drive for visualizations/comparisons.
  • Visualization tools can be found in the semantic-kitti-api repository; a minimal reader for the .label files is sketched below.
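
A minimal reader for the released .label files, following the SemanticKITTI format documented in semantic-kitti-api (one uint32 per point; the lower 16 bits hold the semantic label, the upper 16 bits the instance id). The file path below is a placeholder:

    # Read a single SemanticKITTI .label file (path is a placeholder).
    import numpy as np

    labels = np.fromfile("sequences/08/predictions/000000.label", dtype=np.uint32)
    semantic = labels & 0xFFFF   # semantic class id per point
    instance = labels >> 16      # instance id per point (may be zero for predictions)
    print("points:", labels.shape[0], "classes present:", np.unique(semantic))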

Acknowledgment

The code is built on RandLA-Net. We thank the authors for sharing the codes.
