Official page of Patchwork (RA-L'21 w/ IROS'21)

Overview

Patchwork

Official page of "Patchwork: Concentric Zone-based Region-wise Ground Segmentation with Ground Likelihood Estimation Using a 3D LiDAR Sensor", which is accepted by RA-L with IROS'21 option

[Video] [Preprint Paper] [Project Wiki]

Concept of our method (CZM & GLE)

It's an overall updated version of R-GPF of ERASOR [Code] [Paper].


Demo

KITTI 00

Rough Terrain


Characteristics

  • Single header file (include/patchwork/patchwork.hpp); a minimal usage sketch is shown at the end of this section

  • Robust ground consistency

As shown in the demo videos and the figure below, our method shows the most robust performance among state-of-the-art methods; in particular, its precision/recall exhibits little perturbation.

Please note that the concepts of traversable area and ground are quite different! Please refer to our paper for details.
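Because the core is header-only, it can be called directly from your own node. The following is a minimal sketch, assuming the class template is PatchWork<PointT>, its constructor takes a ros::NodeHandle*, and it exposes estimate_ground(in, ground, nonground, time_taken) as used in the example nodes under /nodes; check include/patchwork/patchwork.hpp for the exact signatures.

#include <ros/ros.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include "patchwork/patchwork.hpp"

int main(int argc, char** argv) {
    ros::init(argc, argv, "patchwork_minimal_sketch");
    ros::NodeHandle nh;

    // Parameters (sensor height, zone model, thresholds, ...) are expected to
    // be on the ROS parameter server, e.g. loaded by one of the launch files.
    PatchWork<pcl::PointXYZI> ground_seg(&nh);

    pcl::PointCloud<pcl::PointXYZI> cloud_in;          // fill with one LiDAR scan
    pcl::PointCloud<pcl::PointXYZI> ground, nonground; // outputs
    double time_taken = 0.0;

    ground_seg.estimate_ground(cloud_in, ground, nonground, time_taken);
    ROS_INFO("Ground: %zu pts, non-ground: %zu pts (%.4f s)",
             ground.size(), nonground.size(), time_taken);
    return 0;
}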

Contents

  1. Test Env.
  2. Requirements
  3. How to Run Patchwork
  4. Citation

Test Env.

The code was tested successfully on

  • Ubuntu 18.04 LTS
  • ROS Melodic

Requirements

ROS Setting

    1. Install ROS on your machine.
    2. Install jsk-visualization, which is required to visualize the Ground Likelihood Estimation status:
sudo apt-get install ros-melodic-jsk-recognition
sudo apt-get install ros-melodic-jsk-common-msgs
sudo apt-get install ros-melodic-jsk-rviz-plugins
    3. Clone and build this package in your catkin workspace:
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/LimHyungTae/patchwork.git
cd .. && catkin build patchwork

How to Run Patchwork

We provide three examples:

  • Offline KITTI dataset
  • Online (ROS Callback) KITTI dataset
  • Own dataset using pcd files

Offline KITTI dataset

  1. Download the SemanticKITTI Odometry dataset (we also need the labels, since the evaluation code is released as well! :))

  2. Set the data_path in launch/offline_kitti.launch for your machine.

The data_path consists of the velodyne folder and the labels folder, as follows (a sketch of the underlying binary formats is given at the end of this subsection):

data_path (e.g. 00, 01, ..., or 10)
_____velodyne
     |___000000.bin
     |___000001.bin
     |___000002.bin
     |...
_____labels
     |___000000.label
     |___000001.label
     |___000002.label
     |...
_____...
   
  3. Run the launch file:
roslaunch patchwork offline_kitti.launch

You can directly feel the speed of Patchwork! 😉
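For reference, the sketch below shows how the binary data inside data_path is laid out (the offline_kitti node already handles this for you). It assumes the standard KITTI/SemanticKITTI formats: each velodyne/*.bin file stores points as four consecutive 32-bit floats (x, y, z, intensity), and each labels/*.label file stores one uint32 per point whose lower 16 bits are the semantic label.

#include <cstdint>
#include <cstdio>
#include <vector>

struct KittiPoint { float x, y, z, intensity; };  // KITTI velodyne *.bin layout

// Read one KITTI scan (sequence of 4 float32 values per point).
std::vector<KittiPoint> read_bin(const char* path) {
    std::vector<KittiPoint> points;
    if (FILE* f = std::fopen(path, "rb")) {
        KittiPoint p;
        while (std::fread(&p, sizeof(KittiPoint), 1, f) == 1) points.push_back(p);
        std::fclose(f);
    }
    return points;
}

// Read the matching SemanticKITTI labels (one uint32 per point;
// the lower 16 bits carry the semantic class used for evaluation).
std::vector<uint16_t> read_labels(const char* path) {
    std::vector<uint16_t> labels;
    if (FILE* f = std::fopen(path, "rb")) {
        uint32_t raw;
        while (std::fread(&raw, sizeof(uint32_t), 1, f) == 1)
            labels.push_back(static_cast<uint16_t>(raw & 0xFFFF));
        std::fclose(f);
    }
    return labels;
}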

Online (ROS Callback) KITTI dataset

We also provide a rosbag example. If you want to run Patchwork via rosbag, please refer to this example.

  1. Download the ready-made rosbag:
wget https://urserver.kaist.ac.kr/publicdata/patchwork/kitti_00_xyzilid.bag
  2. After building this package, run the roslaunch as follows:
roslaunch patchwork rosbag_kitti.launch
  3. Then play the rosbag file in another terminal:
rosbag play kitti_00_xyzilid.bag

Own dataset using pcd files

Please refer to /nodes/offline_own_data.cpp.

(Note that your own data may not contain ground-truth labels!)

Be sure to set the right parameters. Otherwise, your results may be wrong, as shown below:

W/ wrong params vs. after setting the right params

For a better understanding of the parameters of Patchwork, please read our wiki: 4. IMPORTANT: Setting Parameters of Patchwork in Your Own Env.
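As a rough illustration of what "setting the right parameters" means, the sketch below reads a few representative values from the ROS parameter server. The parameter names and default values here are illustrative assumptions only, not the authoritative list; the actual names are defined in the package's launch files and the wiki page above.

#include <ros/ros.h>

// Illustrative sketch only: parameter names and defaults are assumptions.
void load_example_params(ros::NodeHandle& nh) {
    double sensor_height, max_range, min_range;
    nh.param("sensor_height", sensor_height, 1.723);  // LiDAR height above ground [m]
    nh.param("max_range", max_range, 80.0);           // outer radius of the concentric zone model [m]
    nh.param("min_range", min_range, 2.7);            // inner radius [m]; closer points are discarded
    ROS_INFO("sensor_height=%.3f, range=[%.1f, %.1f] m", sensor_height, min_range, max_range);
}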

Offline (Using *.pcd or *.bin file)

  1. Utilize /nodes/offline_own_data.cpp

  2. Please check the output with the following command and the corresponding files:

roslaunch patchwork offline_ouster128.launch
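As a rough guide, the essence of such an offline node looks like the sketch below, under the same API assumptions as the earlier sketch (PatchWork<PointT> with estimate_ground(...)). The file path is a placeholder; the real node also handles *.bin input.

#include <ros/ros.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include "patchwork/patchwork.hpp"

int main(int argc, char** argv) {
    ros::init(argc, argv, "offline_own_data_sketch");
    ros::NodeHandle nh;
    PatchWork<pcl::PointXYZI> ground_seg(&nh);  // reads its parameters from the launch file

    pcl::PointCloud<pcl::PointXYZI> cloud, ground, nonground;
    if (pcl::io::loadPCDFile("/path/to/your/scan.pcd", cloud) < 0) {  // placeholder path
        ROS_ERROR("Failed to load the pcd file");
        return 1;
    }

    double time_taken = 0.0;
    ground_seg.estimate_ground(cloud, ground, nonground, time_taken);
    ROS_INFO("Ground: %zu / %zu points (%.4f s)", ground.size(), cloud.size(), time_taken);
    return 0;
}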

Online (via rosbag)

  1. Utilize rosbag_kitti.launch.

  2. To do so, remap the subscribed topic, e.g., add a remap line as follows:

<remap from="/node" to="$YOUR_LIDAR_TOPIC_NAME$"/>
  3. In addition, a minor modification of ros_kitti.cpp is necessary; refer to offline_own_data.cpp.
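The modified node essentially boils down to a callback like the one sketched below, under the same API assumptions as above (PatchWork<PointT> with estimate_ground(...)) and with pcl::PointXYZI assumed as the point type of your LiDAR driver; consult ros_kitti.cpp and offline_own_data.cpp for the actual implementation.

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_types.h>
#include <boost/shared_ptr.hpp>
#include "patchwork/patchwork.hpp"

boost::shared_ptr<PatchWork<pcl::PointXYZI>> ground_seg;
ros::Publisher pub_ground, pub_nonground;

void cloud_callback(const sensor_msgs::PointCloud2ConstPtr& msg) {
    pcl::PointCloud<pcl::PointXYZI> cloud_in, cloud_ground, cloud_nonground;
    pcl::fromROSMsg(*msg, cloud_in);

    double time_taken = 0.0;
    ground_seg->estimate_ground(cloud_in, cloud_ground, cloud_nonground, time_taken);

    sensor_msgs::PointCloud2 ground_msg, nonground_msg;
    pcl::toROSMsg(cloud_ground, ground_msg);
    pcl::toROSMsg(cloud_nonground, nonground_msg);
    ground_msg.header = nonground_msg.header = msg->header;  // keep the original frame and stamp
    pub_ground.publish(ground_msg);
    pub_nonground.publish(nonground_msg);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "patchwork_own_lidar_sketch");
    ros::NodeHandle nh;
    ground_seg.reset(new PatchWork<pcl::PointXYZI>(&nh));
    pub_ground    = nh.advertise<sensor_msgs::PointCloud2>("ground", 1);
    pub_nonground = nh.advertise<sensor_msgs::PointCloud2>("nonground", 1);
    // "/node" is remapped to your LiDAR topic by the <remap> line above
    ros::Subscriber sub = nh.subscribe("/node", 1, cloud_callback);
    ros::spin();
    return 0;
}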

Citation

If you use our code or method in your work, please consider citing the following:

@article{lim2021patchwork,
title={Patchwork: Concentric Zone-based Region-wise Ground Segmentation with Ground Likelihood Estimation Using a 3D LiDAR Sensor},
author={Lim, Hyungtae and Oh, Minho and Myung, Hyun},
journal={IEEE Robotics and Automation Letters},
year={2021}
}

Description

All explanations of parameters and other experimental results will be uploaded to the wiki.

Contact

If you have any questions, please let me know:

TODO List

  • Add ROS support
  • Add preprint paper
  • Add demo videos
  • Add own dataset examples
  • Update wiki

Owner
Hyungtae Lim
Ph.D. Candidate at URL Lab. @ KAIST, South Korea