For visualizing the Dair-V2X-I dataset

Overview

3D Detection & Tracking Viewer

This project is based on hailanyi/3D-Detection-Tracking-Viewer, with modifications. You can find the original version of the code here: https://github.com/hailanyi/3D-Detection-Tracking-Viewer

This project was developed for viewing 3D object detection results on the Dair-V2X-I dataset.

It supports rendering 3D bounding boxes in the point cloud and rendering projected boxes on images.

Features

  • Captioning box IDs (info) in the 3D scene
  • Projecting 3D boxes or points onto the 2D image

Design pattern

This code consists of two parts: the conversion tools and the visualization of 3D detection results.

Change log

  • (2022.02.01) Adapted to the Dair-V2X-I dataset

Prepare data

  • Dair-V2X-I detection dataset
  • Convert the Dair-V2X-I dataset to KITTI format using the conversion tool

Requirements (Updated 2021.11.2)

python==3.7.11
numpy==1.21.4
vedo==2022.0.1
vtk==8.1.2
opencv-python==4.1.1.26
matplotlib==3.4.3
open3d==0.14.1

It is recommended to use Anaconda to create the visualization environment.

conda create -n dair_vis python=3.7

To activate this environment, use

conda activate dair_vis

Install the requirements

pip install -r requirements.txt

To deactivate an active environment, use

conda deactivate

Convert tools

  • Prepare a dataset with the following structure:
  • "kitti_format" must be an empty folder to store the conversion results
  • "source_format" stores the source Dair-V2X-I dataset
# For Dair-V2X-I Dataset  
dair_v2x_i
├── kitti_format
├── source_format
│   ├── single-infrastructure-side
│   │   ├── calib
│   │   │   ├── camera_intrinsic
│   │   │   └── virtuallidar_to_camera
│   │   └── label
│   │       ├── camera
│   │       └── virtuallidar
│   ├── single-infrastructure-side-example
│   │   ├── calib
│   │   │   ├── camera_intrinsic
│   │   │   └── virtuallidar_to_camera
│   │   ├── image
│   │   ├── label
│   │   │   ├── camera
│   │   │   └── virtuallidar
│   │   └── velodyne
│   ├── single-infrastructure-side-image
│   └── single-infrastructure-side-velodyne

  • If you have the same folder structure, you only need to change "root_path" in config/config.yaml to your local path
  • Run the Jupyter notebook server and open "convert.ipynb"
  • The code is intentionally simple, so there are no input parameters for advanced customization. To run the following steps, comment out or copy the relevant code and execute each one separately (a calib-conversion sketch follows this list):
      - Convert calib files to KITTI format
      - Convert camera-based label files to KITTI format
      - Convert lidar-based label files to KITTI format
      - Convert image folders to KITTI format
      - Convert velodyne folders to KITTI format
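
For reference, here is a minimal sketch of what the calib conversion involves. It assumes the intrinsic JSON stores "cam_K" as nine row-major values (as stated in the notes below) and that the virtuallidar_to_camera JSON uses "rotation" (3×3) and "translation" (3×1) keys, which you should verify against your copy of the dataset:

import json
import numpy as np

def convert_calib(intrinsic_json, extrinsic_json, out_txt):
    # "cam_K" holds the 3x3 camera intrinsic matrix (see the notes below).
    with open(intrinsic_json) as f:
        cam_k = np.array(json.load(f)["cam_K"], dtype=np.float64).reshape(3, 3)
    # "rotation"/"translation" are assumed key names for the lidar-to-camera extrinsics.
    with open(extrinsic_json) as f:
        ext = json.load(f)
    rot = np.array(ext["rotation"], dtype=np.float64).reshape(3, 3)
    trans = np.array(ext["translation"], dtype=np.float64).reshape(3, 1)
    tr_velo_to_cam = np.hstack([rot, trans])  # 3x4 lidar-to-camera transform
    with open(out_txt, "w") as f:
        f.write("P2: " + " ".join(f"{v:.12e}" for v in cam_k.flatten()) + "\n")
        f.write("Tr_velo_to_cam: " + " ".join(f"{v:.12e}" for v in tr_velo_to_cam.flatten()) + "\n")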

After the conversion, you will get the following result:

dair_v2x_i
├── kitti_format
│   ├── calib
│   ├── image_2
│   ├── label_2
│   ├── label_velodyne
│   └── velodyne
 
  • label_2 is based on the camera labels, with the size information (w, h, l) replaced by the information from the lidar labels; this looks better in the camera view.
  • label_velodyne is based on the velodyne (lidar) labels.
  • P2 holds the camera intrinsic matrix, a 3×3 matrix, unlike KITTI (where P2 is a 3×4 projection matrix). It is converted from "cam_K" in the JSON calib file.
  • Tr_velo_to_cam holds the lidar-to-camera transformation matrix, a 3×4 matrix (see the projection sketch after this list).
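
Since P2 here is a plain 3×3 intrinsic matrix rather than KITTI's 3×4 projection matrix, projecting a lidar point onto the image looks roughly like this minimal sketch:

import numpy as np

def project_lidar_point(pt_lidar, p2, tr_velo_to_cam):
    # pt_lidar: (3,) point in lidar coordinates; p2: 3x3; tr_velo_to_cam: 3x4.
    pt_cam = tr_velo_to_cam @ np.append(pt_lidar, 1.0)  # lidar -> camera frame
    uvw = p2 @ pt_cam                                   # perspective projection
    return uvw[:2] / uvw[2]                             # pixel coordinates (u, v)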

Usage

1. Set the path to the dataset folder used as input to the visualizer

If you have completed the conversion step, the path should already be set correctly. Otherwise, set "root_path" in config/config.yaml to the correct path.

2. Choose camera- or lidar-based labels for visualization

Set the "label_select" parameter in config.yaml to "cam" or "vel" to load the labels from label_2 or label_velodyne, respectively. A minimal config-loading sketch follows.
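
This is only a sketch of how the configuration might be read; the key names "root_path" and "label_select" come from this README, everything else is illustrative:

import yaml

# Load the viewer configuration (requires PyYAML).
with open("config/config.yaml") as f:
    cfg = yaml.safe_load(f)

assert cfg["label_select"] in ("cam", "vel"), "label_select must be 'cam' or 'vel'"
# "cam" reads from label_2, "vel" from label_velodyne.
label_dir = "label_2" if cfg["label_select"] == "cam" else "label_velodyne"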

3. Run and Terminate

  • You can start the program with the following command
python dair_3D_detection_viewer.py
  • Pressing space in the lidar window will display the next frame
  • Terminating the program is trickier: you cannot terminate it while a static frame is displayed. Press space repeatedly so the frames play continuously, and once the program is clearly saturated and no longer responding, press Ctrl-C in the terminal window to terminate it. It may take a few tries to get the hang of it.

Notes on the Dair-V2X-I dataset

  • In the calib files of this dataset, "cam_K" is the real camera intrinsic matrix, not "P", although the two are very close in value and structure.
  • The dataset contains images from multiple cameras with different focal lengths and perspectives, and the camera intrinsic matrix changes with each image file. When using this dataset, make sure the calib file you use corresponds to the image file (e.g. do not apply the 000000.txt parameters to all image files).
  • The file indices in this dataset are non-contiguous (e.g. 000023 is missing), so do not generate file names directly from 0 to len(dataset); enumerate the files that actually exist, as in the sketch after this list.
  • The dataset provides separately optimized labels for both lidar and camera. In testing, projecting the lidar labels onto the camera image shows errors (the projection matrix is correct; the labels themselves are off), and likewise the camera labels fit the lidar point cloud poorly. It is therefore recommended to use the lidar labels for the point cloud and the fused (camera) labels for the image.
  • The labels contain some additional object classes; for example, you may see some trafficcone objects.
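
A minimal sketch of the safe way to collect frame IDs, assuming the converted kitti_format layout shown above (the image file extension may differ in your copy of the dataset):

import os

image_dir = "dair_v2x_i/kitti_format/image_2"
# Enumerate the files that actually exist instead of assuming a contiguous range.
frame_ids = sorted(os.path.splitext(name)[0] for name in os.listdir(image_dir))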