Code for "FGR: Frustum-Aware Geometric Reasoning for Weakly Supervised 3D Vehicle Detection", ICRA 2021

FGR

This repository contains the Python implementation for the paper "FGR: Frustum-Aware Geometric Reasoning for Weakly Supervised 3D Vehicle Detection" (ICRA 2021) [arXiv].

Installation

Prerequisites

  • Python 3.6
  • scikit-learn, opencv-python, numpy, easydict, pyyaml
conda create -n FGR python=3.6
conda activate FGR
pip install -r requirements.txt

Usage

Data Preparation

Please download the KITTI 3D object detection dataset from here and organize it as follows:

${Root Path To Your KITTI Dataset}
├── data_object_image_2
│   ├── training
│   │   └── image_2
│   └── testing (optional)
│       └── image_2
│
├── data_object_label_2
│   └── training
│       └── label_2
│
├── data_object_calib
│   ├── training
│   │   └── calib
│   └── testing (optional)
│       └── calib
│
└── data_object_velodyne
    ├── training
    │   └── velodyne
    └── testing (optional)
        └── velodyne
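
If it helps, a quick sanity check like the one below (not part of the repo; the root-path placeholder mirrors the layout above) can confirm that the training folders are in place before running the scripts:

# Optional sanity check (not part of the repo): verify the expected KITTI layout.
import os

kitti_root = '${Root Path To Your KITTI Dataset}'  # placeholder, as in the tree above
required = [
    'data_object_image_2/training/image_2',
    'data_object_label_2/training/label_2',
    'data_object_calib/training/calib',
    'data_object_velodyne/training/velodyne',
]
for rel in required:
    path = os.path.join(kitti_root, rel)
    print('OK     ' if os.path.isdir(path) else 'MISSING', path)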

Retrieving Pseudo Labels

Stage I: Coarse 3D Segmentation

In this stage, we obtain a coarse 3D segmentation mask for each car. Please run the following command:

cd FGR
python save_region_grow_result.py --kitti_dataset_dir ${Path To Your KITTI Dataset} --output_dir ${Path To Save Region-Growth Result}
  • This script uses multiprocessing.Pool, which needs the number of parallel processes. The default is 8; change it by adding "--process ${Process Number You Want}" to the above command if needed (a generic usage sketch follows this list).
  • The region-growth result takes about 170 MB of disk space, and execution takes about 3 hours with the default of 8 processes.
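
For reference, the parallelism here follows the standard multiprocessing.Pool pattern; the sketch below is purely illustrative, and process_one_frame and the frame list are hypothetical stand-ins for the script's internals:

from multiprocessing import Pool

def process_one_frame(frame_id):
    # Hypothetical stand-in for the per-frame region-growth computation.
    return frame_id

if __name__ == '__main__':
    frame_ids = range(16)   # stand-in for the KITTI training frame IDs
    num_process = 8         # corresponds to the --process argument
    with Pool(num_process) as pool:
        results = pool.map(process_one_frame, frame_ids)
    print(len(results), 'frames processed')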

Stage II: 3D Bounding Box Estimation

In this stage, pseudo labels in KITTI format are computed and stored. Please run the following command:

cd FGR
python detect.py --kitti_dataset_dir ${Path To Your KITTI Dataset} --final_save_dir ${Path To Save Pseudo Labels} --pickle_save_path ${Path To Save Region-Growth Result}
  • multiprocessing.Pool is also used here, with a default of 16 processes. Change it by adding "--process ${Process Number}" to the above command if needed.
  • Add "--not_merge_valid_labels" to skip the validation labels. We only create pseudo labels on the training set; to train deep models afterwards, we simply copy the ground-truth validation labels to the save path. If you only want to keep the training pseudo labels, add this parameter.
  • Add "--save_det_image" if you want to visualize the estimated bboxes (BEV). The visualization results will be saved in "final_save_dir/image".
  • Each visualization sample is drawn in several colors:
    • white points indicate the coarse 3D segmentation of the car
    • cyan lines indicate the left/right sides of the frustum
    • the green point indicates the key vertex
    • yellow lines indicate the GT bbox's 2D projection
    • the purple box indicates the initial estimated bounding box
    • the red box indicates the intersection derived from the purple box, which is also the 2D projection of the final estimated 3D bbox
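
The labels written to final_save_dir follow the standard KITTI label format (one object per line: type, truncation, occlusion, alpha, 2D box, 3D dimensions, 3D location, rotation_y). A minimal reader under that assumption, not part of the repo, might look like this:

# Minimal sketch of reading a KITTI-format (pseudo) label file; not part of the repo.
def read_kitti_label(path):
    objects = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            objects.append({
                'type': fields[0],
                'bbox_2d': [float(v) for v in fields[4:8]],     # left, top, right, bottom (pixels)
                'dimensions': [float(v) for v in fields[8:11]],  # height, width, length (meters)
                'location': [float(v) for v in fields[11:14]],   # x, y, z in camera coordinates (meters)
                'rotation_y': float(fields[14]),
            })
    return objects

# e.g. boxes = read_kitti_label('${Path To Save Pseudo Labels}/000000.txt')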

We also provide the final pseudo training labels and the GT validation labels in ./FGR/detection_result.zip. You can use them directly to train the model.

Use Pseudo Labels to Train 3D Detectors

1. Getting Started

Please refer to the OpenPCDet repo here and complete all the required installation steps.

After downloading the repo and completing the installation, a small modification to the original code is needed:

--------------------------------------------------
pcdet.datasets.kitti.kitti_dataset:
1. Between lines 142 and 143, add: "if len(obj_list) == 0: return None"
2. After line 191, delete "return list(infos)" and add:

final_result = list(infos)
while None in final_result:
    final_result.remove(None)

return final_result
--------------------------------------------------

This is needed because, when creating the dataset, OpenPCDet requires each label file to contain at least one valid label. In our pseudo labels, however, some bad labels are removed, so a label file may end up empty.
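
The sketch below only illustrates the intent of that patch, i.e. dropping frames whose (empty) label file was mapped to None; the names are illustrative and it is not OpenPCDet's actual code:

# Illustrative only: drop frames whose label file produced no valid objects (mapped to None above).
def filter_empty_frames(infos):
    final_result = list(infos)
    # Same effect as the while-loop in the patch: remove every None entry.
    return [info for info in final_result if info is not None]

# Example: suppose two frames had empty pseudo-label files and became None.
frames = [{'frame_id': 0}, None, {'frame_id': 2}, None]
print(filter_empty_frames(frames))  # -> [{'frame_id': 0}, {'frame_id': 2}]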

2. Data Preparation

In OpenPCDet, the KITTI dataset is stored as follows:

data/kitti
├── testing
│   ├── calib
│   ├── image_2
│   └── velodyne
└── training
    ├── calib
    ├── image_2
    ├── label_2
    └── velodyne

This differs from our dataset storage, so we provide a script that constructs the expected structure with symlinks:

sh create_kitti_dataset_new_format.sh ${Path To KITTI Dataset} ${Path To OpenPCDet Directory}

3. Start Training

Please temporarily remove the 'training/label_2' symlink and add a new symlink pointing to the pseudo label path. Then follow the OpenPCDet instructions and train PointRCNN models.
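
For example, the swap can be done with a couple of symlink calls; the snippet below is a sketch only, and both paths are placeholders to fill in:

# Sketch of swapping the label_2 symlink to the pseudo labels (paths are placeholders).
import os

label_dir = '${Path To OpenPCDet Directory}/data/kitti/training/label_2'
pseudo_label_dir = '${Path To Save Pseudo Labels}'   # output directory of detect.py

if os.path.islink(label_dir):
    os.unlink(label_dir)                  # drop the old link to the GT labels
os.symlink(pseudo_label_dir, label_dir)   # point label_2 at the pseudo labels instead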

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{wei2021fgr,
  title={{FGR: Frustum-Aware Geometric Reasoning for Weakly Supervised 3D Vehicle Detection}},
  author={Wei, Yi and Su, Shang and Lu, Jiwen and Zhou, Jie},
  booktitle={ICRA},
  year={2021}
}