Direct LiDAR Odometry: Fast Localization with Dense Point Clouds

Overview

DLO is a lightweight and computationally efficient frontend LiDAR odometry solution that provides consistent and accurate localization. It features several algorithmic innovations that increase the speed, accuracy, and robustness of pose estimation in perceptually-challenging environments, and it has been extensively tested on aerial and legged robots.

This work was part of NASA JPL Team CoSTAR's research and development efforts for the DARPA Subterranean Challenge, in which DLO was the primary state estimation component for our fleet of autonomous aerial vehicles.


Instructions

DLO requires an input point cloud of type sensor_msgs::PointCloud2 with an optional IMU input of type sensor_msgs::Imu. Note that although IMU data is not required, it can be used for initial gravity alignment and will help with point cloud registration.
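
If you are unsure whether your driver publishes these exact message types, you can verify them with standard ROS tooling before launching DLO (the topic names below are placeholders; substitute your own):

rostopic info /robot/velodyne_points   # should report Type: sensor_msgs/PointCloud2
rostopic info /robot/vn100/imu         # should report Type: sensor_msgs/Imu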

Dependencies

Our system has been tested extensively on both Ubuntu 18.04 Bionic with ROS Melodic and Ubuntu 20.04 Focal with ROS Noetic, although other versions may work. The following configuration with required dependencies has been verified to be compatible:

  • Ubuntu 18.04 or 20.04
  • ROS Melodic or Noetic (roscpp, std_msgs, sensor_msgs, geometry_msgs, pcl_ros)
  • C++ 14
  • CMake >= 3.16.3
  • OpenMP >= 4.5
  • Point Cloud Library >= 1.10.0
  • Eigen >= 3.3.7

That said, installing the binary packages from apt should work:

sudo apt install libomp-dev libpcl-dev libeigen3-dev 
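
If you want to confirm that the packaged versions meet the minimums listed above, a quick check along these lines should work (package names as installed above; output formatting varies by release):

cmake --version                     # expect >= 3.16.3
pkg-config --modversion eigen3      # expect >= 3.3.7
dpkg -s libpcl-dev | grep Version   # expect >= 1.10.0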

Compiling

Create a catkin workspace, clone the direct_lidar_odometry repository into the src folder, and compile via the catkin_tools package (or catkin_make if preferred):

mkdir ws && cd ws && mkdir src && catkin init && cd src
git clone https://github.com/vectr-ucla/direct_lidar_odometry.git
catkin build
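
Once the build completes, source the workspace overlay so that ROS can find the new package. This assumes the default devel space produced by catkin build and a workspace located at ~/ws, so adjust the path to match your setup:

source ~/ws/devel/setup.bash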

Execution

After sourcing the workspace, launch the DLO odometry and mapping ROS nodes via:

roslaunch direct_lidar_odometry dlo.launch \
  pointcloud_topic:=/robot/velodyne_points \
  imu_topic:=/robot/vn100/imu

Make sure to set the pointcloud_topic and imu_topic input arguments to your specific topics. If an IMU is not being used, set the dlo/imu ROS param to false in cfg/dlo.yaml. However, if IMU data is available, allow DLO to calibrate and gravity-align for three seconds before moving. Note that the current implementation assumes the LiDAR and IMU coordinate frames coincide, so make sure the sensors are physically mounted near each other.
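
Before launching, it can also be worth confirming that both inputs are publishing and noting their frame IDs, since the frames are assumed to coincide. The commands below are a quick sanity check using the same placeholder topics as above:

rostopic hz /robot/velodyne_points
rostopic hz /robot/vn100/imu
rostopic echo -n 1 /robot/vn100/imu | grep frame_id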

Test Data

For your convenience, we provide example test data here (9 minutes, ~4.2GB). To run, first launch DLO (with default point cloud and IMU topics) via:

roslaunch direct_lidar_odometry dlo.launch

In a separate terminal session, play back the downloaded bag:

rosbag play dlo_test.bag
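
To confirm that DLO is producing output while the bag plays, you can list its topics and check the odometry rate; the exact topic names depend on the launch configuration, so the grep filter below is just a convenient way to find them:

rostopic list | grep dlo
rostopic hz <odometry topic reported above>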

Citation

If you found this work useful, please cite our manuscript:

@article{chen2021direct,
  title={Direct LiDAR Odometry: Fast Localization with Dense Point Clouds},
  author={Chen, Kenny and Lopez, Brett T and Agha-mohammadi, Ali-akbar and Mehta, Ankur},
  journal={arXiv preprint arXiv:2110.00605},
  year={2021}
}

Acknowledgements

We thank the authors of the FastGICP and NanoFLANN open-source packages:

  • Kenji Koide, Masashi Yokozuka, Shuji Oishi, and Atsuhiko Banno, “Voxelized GICP for Fast and Accurate 3D Point Cloud Registration,” in IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2021, pp. 11054–11059.
  • Jose Luis Blanco and Pranjal Kumar Rai, “NanoFLANN: a C++ Header-Only Fork of FLANN, A Library for Nearest Neighbor (NN) with KD-Trees,” https://github.com/jlblancoc/nanoflann, 2014.

License

This work is licensed under the terms of the MIT license.


Comments
  • Coordinate system conversion

    Hi, thanks for your great work. I recorded a dataset with my own LiDAR and IMU and ran it through DLO. The map generated by the algorithm (figure omitted) shows very serious IMU drift. Is this related to the coordinate-system transformation? How should I set up and configure the files to improve this?

    opened by HomieRegina 15
  • I use nanoicp for localization, but it can't work

    Thanks for your work. I want to use nanoicp for localization, but it does not work correctly; when I use PCL's GICP instead, it works. Is there anything I need to do?

    opened by tust13018211 10
  • Map generation

    Hi, how is the map generated? What algorithm is used? Can we load a PCD map of the environment and then use DLO just for localization? I assumed DLO was a localization algorithm, but it seems to take the point cloud data from the bag file and generate a map on its own.

    opened by Srichitra-S 9
  • GICP speed

    When I run the source code, each scan's GICP registration takes only 2-3 ms, but when I run my own code with the GICP library it takes 100-200 ms. Both input clouds are almost the same size. Which settings could cause this difference?

    opened by yst1 8
  • How to localize on a given PCD map?

    I have trouble understanding how to localize on a given 3D PCD map; can anyone explain the steps?

    I have a PCD map that I can load into RViz, and I have LiDAR points and IMU data. How do I use this package for localization?

    opened by srinivasrama 6
  • Jacobian

    Hello authors, I want to derive the Jacobian used in the code. I see that a global perturbation is used for the update. When I try this myself I run into a problem (derivation image omitted). Using the Woodbury matrix identity seems to make the formula more complicated, so I am stuck and would appreciate your help. Thanks in advance!

    Best regards, Xiaoliang Jiao

    opened by narutojxl 6
  • The transformation between your LiDAR and IMU?

    I use a VLP-16 LiDAR and a MicroStrain IMU (3DM-GX5-25) to run your program, but it failed. I think my transformation between the IMU and LiDAR is different from yours, so could you tell me the actual transformation between your IMU and LiDAR? The attached picture (omitted) shows my configuration. Thank you very much!

    opened by nonlinear1 6
  • Conversion of LiDAR and IMU coordinate systems

    Hi! I use a VLP-16 LiDAR and a MicroStrain IMU (3DM-GX5-25) to record bags, and the attached image (omitted) shows part of the map I built with your algorithm. Because the IMU and LiDAR coordinate systems are not physically aligned, the reconstruction of the two-level environment is affected. Could you please tell me how to convert between the IMU and LiDAR coordinate systems?

    opened by HomieRegina 4
  • Some trivial questions

    Hello authors, I have some trivial questions and would appreciate your help. Thanks in advance.

    opened by narutojxl 4
  • Coordinate Frame

    Hi,

    Thank you for sharing this great work.

    I am having trouble determining the coordinate frame of the odometry. Which frame do you use for the odometry, NED or body? I assumed NED, so I expected x to increase when I move the LiDAR + IMU north, but the behaviour is not consistent; likewise, moving the system east should increase y, but I see the same problem. I suspect you use the body frame or some other convention for the odometry.

    I also wonder how to send the odometry to a drone's autopilot. I used MAVROS, selected the relevant odometry topic, and forwarded the messages unchanged. The odometry arrives at the autopilot, but x and y appear swapped there even though they are not swapped in the DLO output.

    In summary: which coordinate frame do you use for the odometry, and how can I set the odometry frame to NED? x and y decrease when I move the LiDAR + IMU north and east respectively, and z looks flipped by 180 degrees. Should I multiply x and y by -1 or apply a rotation matrix to fix this?

    How can I ensure that the autopilot and your odometry use the same coordinate frame?

    opened by danieldive 3
  • How to correct point cloud distortion caused by motion?

    Hi, thank you for your great work on DLO. I have a question: when the robot moves fast, the LiDAR point cloud becomes distorted by the motion, which negatively affects map construction. How does DLO deal with this problem?

    opened by JACKLiuDay 2
Releases: v1.4.2

Owner: VECTR at UCLA (Verifiable & Control-Theoretic Robotics Laboratory)