Automatic Calibration for Non-repetitive Scanning Solid-State LiDAR and Camera Systems

Overview

ACSC

Automatic extrinsic calibration for non-repetitive scanning solid-state LiDAR and camera systems.

[Figure: calibration pipeline]

System Architecture

[Figure: system architecture]

1. Dependencies

Tested with Ubuntu 16.04 64-bit and Ubuntu 18.04 64-bit.

  • ROS (tested with kinetic / melodic)

  • Eigen 3.2.5

  • PCL 1.8

  • Python 2.x / 3.x

  • python-pcl

  • opencv-python (>= 4.0)

  • scipy

  • scikit-learn

  • transforms3d

  • pyyaml

  • mayavi (optional, for debug and visualization only)
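Most of the Python dependencies can be installed with pip; a typical (non-authoritative) invocation is shown below. Note that python-pcl in particular often needs to be built from source against your local PCL installation:

pip install opencv-python scipy scikit-learn transforms3d pyyaml
pip install python-pcl   # may need to be built against PCL 1.8
pip install mayavi       # optional, for debug and visualization only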

2. Preparation

2.1 Download and installation

Use the following command to download this repo.

Note: the submodule should also be cloned (hence the --recurse-submodules flag).

git clone --recurse-submodules https://github.com/HViktorTsoi/ACSC

Compile and install the normal-diff segmentation extension.

cd /path/to/your/ACSC/segmentation

python setup.py install

We developed a practical ROS tool for convenient calibration data collection, which automatically organizes the data into the format described in 3.1. We strongly recommend using this tool to simplify the calibration process.

If you don't have ROS or prefer not to use the provided tool, that's fine: simply process the images and point clouds into the format described in 3.1 manually.

First, enter the directory of the collection tool and run the following commands:

cd /path/to/your/ACSC/ros/livox_calibration_ws

catkin_make

source ./devel/setup.zsh # or source ./devel/setup.sh

File explanation

  • ros/: The data collection tool directory (A ros workspace);

  • configs/: The directory used to store configuration files;

  • calibration.py: The main code for solving extrinsic parameters;

  • projection_validation.py: The code for visualization and verification of calibration results;

  • utils.py: utilities.

2.2 Preparing the calibration board

[Figure: checkerboard calibration target]

We use a common checkerboard as the calibration target.

Note: to ensure a high calibration success rate, the following requirements should be met when making and placing the calibration board:

  1. The side length of each black/white square on the checkerboard should be >= 8 cm;

  2. The checkerboard should be printed on white paper and pasted onto a flat, rigid rectangular surface that will not deform;

  3. There should be no extra borders around the checkerboard;

  4. The checkerboard should be mounted on a thin monopod, or suspended in the air with a thin wire, and the support should be kept as stable as possible during calibration (the point clouds of several frames are integrated, so any motion blurs the target);

  5. When placing the checkerboard on its base, the lower edge of the board should be parallel to the ground;

  6. There should be no obstructions within a 3 m radius of the calibration board.

Checkerboard placement

[Figure: checkerboard placement]

Sensor setup

[Figure: sensor setup]

3. Extrinsic Calibration

3.1 Data format

The image and LiDAR point cloud data need to be organized into the following format:

|- data_root
|-- images
|---- 000000.png
|---- 000001.png
|---- ......
|-- pcds
|---- 000000.npy
|---- 000001.npy
|---- ......
|-- distortion
|-- intrinsic

Here, the images directory contains the camera images of the checkerboard at different placements;

The pcds directory contains the point clouds corresponding to the images; each point cloud is a numpy array with shape N x 4, where each row holds the x, y, z and reflectance values of one point;

The distortion and intrinsic files hold the camera's distortion parameters and intrinsic parameters, respectively (described in detail in 3.3).
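As a quick sanity check of this format, the short sketch below (the file path is hypothetical) loads one frame and verifies the expected N x 4 layout:

import numpy as np

# Load one integrated LiDAR frame; each row is (x, y, z, reflectance).
pc = np.load('/tmp/data/sample/pcds/000000.npy')
assert pc.ndim == 2 and pc.shape[1] == 4, 'expected an N x 4 array'

xyz, reflectance = pc[:, :3], pc[:, 3]
print('number of points:', pc.shape[0])
print('max range (m):', np.linalg.norm(xyz, axis=1).max())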

Sample Data

Sample solid-state LiDAR point clouds, images, and camera intrinsic data (375.6 MB) can be downloaded from:

Google Drive | BaiduPan (Code: fws7)

If you are testing with the provided sample data, you can directly jump to 3.4.

3.2 Data collection for your own sensors

First, make sure you can receive the data topics from the Livox LiDAR ( sensor_msgs.PointCloud2 ) and the camera ( sensor_msgs.Image );

Run the launch file of the data collection tool:

mkdir /tmp/data

cd /path/to/your/ACSC/ros/livox_calibration_ws
source ./devel/setup.zsh # or source ./devel/setup.sh

roslaunch calibration_data_collection lidar_camera_calibration.launch \
    config-path:=/home/hvt/Code/livox_camera_calibration/configs/data_collection.yaml \
    image-topic:=/camera/image_raw \
    lidar-topic:=/livox/lidar \
    saving-path:=/tmp/data

Here, config-path is the path of the configuration file; usually configs/data_collection.yaml can be used and left at its defaults.

image-topic and lidar-topic are the topic names on which camera images and LiDAR point clouds are received, respectively.

saving-path is the directory where the calibration data is temporarily stored.

After launching, you should see the following two interfaces: the real-time camera image, and the bird's-eye projection of the LiDAR point cloud.

If either interface is not displayed properly, check your image-topic and lidar-topic to verify that the data can be received normally.

[Figure: data collection GUI]

Place the checkerboard and observe its position on the LiDAR bird's-eye view interface, to ensure that it is within the FOV of both the LiDAR and the camera.

Then press <Enter> to record the data. You will need to wait a few seconds while the point cloud is collected and integrated; once the screen prompts that data recording is complete, change the position of the checkerboard and record the next set of data.

To ensure robust calibration results, the placement of the checkerboard should meet the following requirements:

  1. The checkerboard should be at least 2 meters away from the LiDAR;

  2. The checkerboard should be placed in at least 6 positions: left, middle, and right at short range (about 4 m), and left, middle, and right at long range (about 8 m);

  3. At each position, the checkerboard should be recorded in 2~3 different orientations.

When all calibration data has been collected, press Ctrl+C in the terminal to close the calibration tool.

At this point, you should see a newly generated data folder under the saving-path we specified, with images saved in images and point clouds saved in pcds:

[Figure: collection result]

3.3 Camera intrinsic parameters

There are many tools for camera intrinsic calibration; we recommend the Camera Calibrator App in MATLAB, or the Camera Calibration Tools in ROS, to calibrate the camera intrinsics.
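Alternatively, if you prefer a script-based workflow, the intrinsics can also be estimated with OpenCV. The sketch below follows the standard cv2.calibrateCamera recipe; the image path, checkerboard size, and square size are placeholders you should adapt:

import glob
import cv2
import numpy as np

W, H, GRID_SIZE = 7, 5, 0.08  # inner corners and square side length (m); adjust to your board

# 3D corner coordinates of the checkerboard in its own plane (z = 0).
objp = np.zeros((W * H, 3), np.float32)
objp[:, :2] = np.mgrid[0:W, 0:H].T.reshape(-1, 2) * GRID_SIZE

obj_points, img_points = [], []
for path in sorted(glob.glob('/path/to/intrinsic/images/*.png')):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (W, H))
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# imageSize is (width, height).
ret, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print('intrinsic matrix:\n', K)
print('distortion (k1 k2 p1 p2 k3):', dist.ravel())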

Write the camera intrinsic matrix

fx  s   x0
0   fy  y0
0   0   1

into the intrinsic file under data-root. The format should be as shown below:

[Figure: example intrinsic file]

Write the camera distortion vector

k1  k2  p1  p2  k3

into the distortion file under data-root. The format should be as shown below:

[Figure: example distortion file]
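For example, assuming the two files are plain whitespace-separated text (check the sample data for the exact layout expected by the loader), they could be written with numpy; all numeric values below are placeholders:

import numpy as np

# Placeholder intrinsic matrix (fx s x0 / 0 fy y0 / 0 0 1).
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
# Placeholder distortion vector (k1 k2 p1 p2 k3).
dist = np.array([-0.10, 0.03, 0.0005, -0.0002, 0.0])

np.savetxt('/tmp/data/sample/intrinsic', K)
np.savetxt('/tmp/data/sample/distortion', dist.reshape(1, -1))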

3.4 Extrinsic Calibration

When you have completed all the steps in 3.1 ~ 3.3, the data-root directory should contain the following content:

[Figure: contents of data-root]

If any files are missing, please confirm that all the steps in 3.1~3.3 have been completed.

Modify the calibration configuration file in the configs directory; here we take sample.yaml as an example:

  1. Modify root under data to the root directory of the data collected in 3.1~3.3. In our example, root should be /tmp/data/1595233229.25;

  2. Modify the chessboard parameters under data: change W and H to the number of inner corners of the checkerboard that you use (note that this is the number of inner corners, not the number of squares; for the checkerboard in 2.2, W = 7 and H = 5), and modify GRID_SIZE to the side length of a single white/black square of the checkerboard (unit: m). A sketch of the resulting section is shown below.
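A minimal sketch of the relevant YAML section, with the field layout inferred from the description above (consult configs/sample.yaml for the authoritative structure):

data:
  root: /tmp/data/1595233229.25   # data root collected in 3.1~3.3
  chessboard:
    W: 7                # inner corners along one side
    H: 5                # inner corners along the other side
    GRID_SIZE: 0.08     # square side length in meters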

Then, run the extrinsic calibration code:

python calibration.py --config ./configs/sample.yaml

After calibration, the extrinsic parameter matrix will be written into the parameter/extrinsic file under data-root.

[Figure: extrinsic calibration output]

4. Validation of Results

After the extrinsic calibration of step 3, run projection_validation.py to check whether the calibration is accurate:

python projection_validation.py --config ./configs/sample.yaml

It displays the point cloud reprojected onto the image using the solved extrinsic parameters, the RGB-colorized point cloud, and a visualization of the detected 3D corners reprojected onto the image.

Note that the 3D point cloud colorization results will only be displayed if mayavi is installed.
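The underlying check is a standard pinhole reprojection. A minimal sketch of the idea (file layout as in 3.1, and assuming the extrinsic is stored as a 4 x 4 LiDAR-to-camera matrix; paths are placeholders):

import cv2
import numpy as np

root = '/tmp/data/sample'                      # hypothetical data root
K = np.loadtxt(root + '/intrinsic')            # 3 x 3 intrinsic matrix
dist = np.loadtxt(root + '/distortion')        # k1 k2 p1 p2 k3
T = np.loadtxt(root + '/parameter/extrinsic')  # assumed 4 x 4 transform

pc = np.load(root + '/pcds/000000.npy')[:, :3]
img = cv2.imread(root + '/images/000000.png')

# Transform points into the camera frame and keep those in front of it.
cam = (T[:3, :3] @ pc.T + T[:3, 3:4]).T
cam = cam[cam[:, 2] > 0]

# Project with the intrinsic and distortion parameters, then draw.
uv, _ = cv2.projectPoints(cam, np.zeros(3), np.zeros(3), K, dist)
for u, v in uv.reshape(-1, 2).astype(int):
    if 0 <= u < img.shape[1] and 0 <= v < img.shape[0]:
        cv2.circle(img, (int(u), int(v)), 1, (0, 255, 0), -1)
cv2.imwrite('reprojection.png', img)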

Reprojection Result of Livox Horizon Point Cloud

[Figure: Horizon reprojection]

Reprojection Result of Livox Mid100 Point Cloud

[Figure: Mid100 reprojection]

Reprojection Result of Livox Mid40 Point Cloud

[Figure: Mid40 reprojection]

Colorized Point Cloud

[Figure: colorized point cloud]

Detected Corners

[Figure: detected corners (2D and 3D)]

Appendix

I. Tested sensor combinations

No. | LiDAR          | Camera                    | Chessboard Pattern
----|----------------|---------------------------|-------------------
1   | LIVOX Horizon  | MYNTEYE-D 120             | 7x5, 0.08 m
2   | LIVOX Horizon  | MYNTEYE-D 120             | 7x5, 0.15 m
3   | LIVOX Horizon  | AVT Mako G-158C           | 7x5, 0.08 m
4   | LIVOX Horizon  | Pointgrey CM3-U3-31S4C-CS | 7x5, 0.08 m
5   | LIVOX Mid-40   | MYNTEYE-D 120             | 7x5, 0.08 m
6   | LIVOX Mid-40   | MYNTEYE-D 120             | 7x5, 0.15 m
7   | LIVOX Mid-40   | AVT Mako G-158C           | 7x5, 0.08 m
8   | LIVOX Mid-40   | Pointgrey CM3-U3-31S4C-CS | 7x5, 0.08 m
9   | LIVOX Mid-100  | MYNTEYE-D 120             | 7x5, 0.08 m
10  | LIVOX Mid-100  | MYNTEYE-D 120             | 7x5, 0.15 m
11  | LIVOX Mid-100  | AVT Mako G-158C           | 7x5, 0.08 m
12  | LIVOX Mid-100  | Pointgrey CM3-U3-31S4C-CS | 7x5, 0.08 m
13  | RoboSense Ruby | MYNTEYE-D 120             | 7x5, 0.08 m
14  | RoboSense Ruby | AVT Mako G-158C           | 7x5, 0.08 m
15  | RoboSense Ruby | Pointgrey CM3-U3-31S4C-CS | 7x5, 0.08 m
16  | RoboSense RS32 | MYNTEYE-D 120             | 7x5, 0.08 m
17  | RoboSense RS32 | AVT Mako G-158C           | 7x5, 0.08 m
18  | RoboSense RS32 | Pointgrey CM3-U3-31S4C-CS | 7x5, 0.08 m

II. Paper

ACSC: Automatic Calibration for Non-repetitive Scanning Solid-State LiDAR and Camera Systems

@misc{cui2020acsc,
      title={ACSC: Automatic Calibration for Non-repetitive Scanning Solid-State LiDAR and Camera Systems}, 
      author={Jiahe Cui and Jianwei Niu and Zhenchao Ouyang and Yunxiang He and Dian Liu},
      year={2020},
      eprint={2011.08516},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

III. Known Issues

Updating...
