[ICRA 2022] An open-source framework for cooperative detection. Official implementation of OPV2V.

Overview

OpenCOOD


OpenCOOD is an Open COOperative Detection framework for autonomous driving. It is also the official implementation of the ICRA 2022 paper OPV2V.

News

03/17/2022: V2VNet is supported and the results/trained model are provided in the benchmark table.

03/10/2022: Results and pretrained weights for Attentive Fusion with compression are provided.

02/20/2022: F-Cooper is now supported and the results/trained model can be found in the benchmark table.

01/31/2022: Our paper OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication has been accepted by ICRA 2022!

09/21/2021: The OPV2V dataset is publicly available: https://mobility-lab.seas.ucla.edu/opv2v/

Features

  • Provide an easy data API for the Vehicle-to-Vehicle (V2V) multi-modal perception dataset OPV2V

    It currently provides an easy API to load LiDAR data from multiple agents simultaneously in a structured format and convert it to PyTorch tensors directly for model use.

  • Provide multiple SOTA 3D detection backbones

    It supports state-of-the-art LiDAR detectors including PointPillar, Pixor, VoxelNet, and SECOND.

  • Support the most common fusion strategies

    It includes the three most common fusion strategies across different agents: early fusion, late fusion, and intermediate fusion (see the conceptual sketch after this list).

  • Support several SOTA multi-agent visual fusion models

    It supports the most recent multi-agent perception algorithms (currently up to Sep. 2021), including Attentive Fusion, Cooper (early fusion), F-Cooper, and V2VNet. We will keep adding the newest algorithms.

  • Provide a convenient log replay toolbox for the OPV2V dataset (coming soon)

    It also provides an easy tool to replay the original OPV2V dataset. More importantly, it allows users to enrich the original dataset by attaching new sensors or defining additional tasks (e.g., tracking, prediction) without changing the events in the initial dataset (e.g., the positions and number of all vehicles, traffic speed).
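
To make the three fusion strategies concrete, here is a minimal, self-contained Python sketch of where the agent data gets combined in each strategy. This is not OpenCOOD's actual API; the detector and feature extractor are hypothetical stand-ins, and all names are illustrative placeholders.

import numpy as np

def detect(points):
    """Hypothetical stand-in for a single-agent 3D detector (e.g. PointPillar);
    returns one dummy box per input cloud."""
    return points.mean(axis=0, keepdims=True)

def extract_features(points):
    """Hypothetical stand-in for the detector's intermediate feature encoder."""
    return np.histogram(points[:, 0], bins=8)[0].astype(np.float32)

def early_fusion(agent_clouds):
    # Agents share raw LiDAR: merge all clouds (assumed already projected
    # into the ego frame) and run the detector once on the combined cloud.
    merged = np.concatenate(agent_clouds, axis=0)
    return detect(merged)

def late_fusion(agent_clouds):
    # Agents share detections: each agent detects independently, then the
    # ego vehicle merges the boxes (a real system would apply NMS here).
    return np.concatenate([detect(c) for c in agent_clouds], axis=0)

def intermediate_fusion(agent_clouds):
    # Agents share learned features: each agent encodes its own cloud, the
    # features are fused (here by averaging), and a single detection head
    # would decode the fused feature into boxes.
    feats = np.stack([extract_features(c) for c in agent_clouds])
    return feats.mean(axis=0)

clouds = [np.random.rand(1000, 4) for _ in range(3)]  # three agents
print(early_fusion(clouds).shape)         # (1, 4)
print(late_fusion(clouds).shape)          # (3, 4)
print(intermediate_fusion(clouds).shape)  # (8,)

The communication payload differs across strategies (raw points, boxes, or learned features), which is what the bandwidth column in the benchmark table below measures.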

Data Downloading

All the data can be downloaded from Google Drive. If you have a good internet connection, you can directly download the complete large zip file such as train.zip. In case you have trouble downloading large files, we also split each data set into small chunks, which can be found in the directories ending with _chunks, such as train_chunks. After downloading, please run the following command for each set to merge the chunks together:

cat train.zip.parta* > train.zip
unzip train.zip
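
The same pattern should apply to the other chunked splits, assuming their parts follow the same naming convention; for example, for the test set:

cat test.zip.parta* > test.zip
unzip test.zip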

Installation

Please refer to data introduction and installation guide to prepare data and install OpenCOOD. To see more details of OPV2V data, please check our website.

Quick Start

Data sequence visualization

To quickly visualize the LiDAR stream in the OPV2V dataset, first modify the validate_dir in your opencood/hypes_yaml/visualization.yaml to the opv2v data path on your local machine, e.g. opv2v/validate, and then run the following command:

cd ~/OpenCOOD
python opencood/visualization/vis_data_sequence.py [--color_mode ${COLOR_RENDERING_MODE}]

Arguments Explanation:

  • color_mode : str type, indicating the LiDAR color rendering mode. You can choose from 'constant', 'intensity' or 'z-value'.
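
For example, to color the points by intensity:

cd ~/OpenCOOD
python opencood/visualization/vis_data_sequence.py --color_mode intensity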

Train your model

OpenCOOD uses yaml files to configure all the parameters for training. To train your own model from scratch or continue from a checkpoint, run the following command:

python opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir  ${CHECKPOINT_FOLDER}]

Arguments Explanation:

  • hypes_yaml: the path of the training configuration file, e.g. opencood/hypes_yaml/second_early_fusion.yaml, meaning you want to train an early fusion model which utilizes SECOND as the backbone. See Tutorial 1: Config System to learn more about the rules of the yaml files.
  • model_dir (optional): the path to the checkpoint folder. This is used to fine-tune trained models. When model_dir is given, the trainer will discard hypes_yaml and load the config.yaml in the checkpoint folder.
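
For example, to train an early fusion model with the SECOND backbone from scratch, and then resume it from its checkpoint folder (the folder name below is hypothetical):

python opencood/tools/train.py --hypes_yaml opencood/hypes_yaml/second_early_fusion.yaml
python opencood/tools/train.py --hypes_yaml opencood/hypes_yaml/second_early_fusion.yaml --model_dir opencood/logs/second_early_fusion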

Test the model

Before you run the following command, first make sure the validation_dir in config.yaml under your checkpoint folder refers to the testing dataset path, e.g. opv2v_data_dumping/test.

python opencood/tools/inference.py --model_dir ${CHECKPOINT_FOLDER} --fusion_method ${FUSION_STRATEGY} [--show_vis] [--show_sequence]

Arguments Explanation:

  • model_dir: the path to your saved model.
  • fusion_method: indicates the fusion strategy; currently 'early', 'late', and 'intermediate' are supported.
  • show_vis: whether to visualize the detection overlay with the point cloud.
  • show_sequence: the detection results will be visualized in a video stream. It can NOT be set together with show_vis.

The evaluation results will be dumped in the model directory.
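
For example, to evaluate an intermediate fusion checkpoint and render the results as a video stream (the checkpoint folder name below is hypothetical):

python opencood/tools/inference.py --model_dir opencood/logs/pointpillar_attentive_fusion --fusion_method intermediate --show_sequence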

Benchmark and model zoo

Results on the OPV2V dataset (AP@0.7 for no compression / compression)

| Method | Backbone | Fusion Strategy | Bandwidth (Megabit), before/after compression | Default Towns | Culver City | Download |
|---|---|---|---|---|---|---|
| Naive Late | PointPillar | Late | 0.024/0.024 | 0.781/0.781 | 0.668/0.668 | url |
| Cooper | PointPillar | Early | 7.68/7.68 | 0.800/x | 0.696/x | url |
| Attentive Fusion | PointPillar | Intermediate | 126.8/1.98 | 0.815/0.810 | 0.735/0.731 | url |
| F-Cooper | PointPillar | Intermediate | 72.08/1.12 | 0.790/0.788 | 0.728/0.726 | url |
| V2VNet | PointPillar | Intermediate | 72.08/1.12 | 0.822/0.814 | 0.734/0.729 | url |
| Naive Late | VoxelNet | Late | 0.024/0.024 | 0.738/0.738 | 0.588/0.588 | url |
| Cooper | VoxelNet | Early | 7.68/7.68 | 0.758/x | 0.677/x | url |
| Attentive Fusion | VoxelNet | Intermediate | 576.71/1.12 | 0.864/0.852 | 0.775/0.746 | url |
| Naive Late | SECOND | Late | 0.024/0.024 | 0.775/0.775 | 0.682/0.682 | url |
| Cooper | SECOND | Early | 7.68/7.68 | 0.813/x | 0.738/x | url |
| Attentive Fusion | SECOND | Intermediate | 63.4/0.99 | 0.826/0.783 | 0.760/0.760 | url |
| Naive Late | PIXOR | Late | 0.024/0.024 | 0.578/0.578 | 0.360/0.360 | url |
| Cooper | PIXOR | Early | 7.68/7.68 | 0.678/x | 0.558/x | url |
| Attentive Fusion | PIXOR | Intermediate | 313.75/1.22 | 0.687/0.612 | 0.546/0.492 | url |

Note:

  • We suggest using PointPillar as the backbone when developing your own method and comparing against our benchmark, since we implement most of the SOTA methods with this backbone only.
  • We assume a transmission rate of 27 Mbps. Since the LiDAR runs at 10 Hz, the bandwidth requirement should be less than 2.7 Megabit per frame (27 Mbps / 10 Hz) to avoid severe delay.
  • An 'x' in the benchmark table indicates that the bandwidth requirement is too large to be practical.

Tutorials

We have a series of tutorials to help you understand OpenCOOD better. Please check them out.

Citation

If you are using our OpenCOOD framework or OPV2V dataset for your research, please cite the following paper:

@inproceedings{xu2022opencood,
  author = {Runsheng Xu and Hao Xiang and Xin Xia and Xu Han and Jinlong Li and Jiaqi Ma},
  title = {OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication},
  booktitle = {2022 IEEE International Conference on Robotics and Automation (ICRA)},
  year = {2022}
}

Also, under this LICENSE, OpenCOOD is for non-commercial research only. Researchers can modify the source code for their own research only. Contracted work that generates corporate revenues and other general commercial use are prohibited under this LICENSE. See the LICENSE file for details and possible opportunities for commercial use.

Future Plans

  • Provide camera APIs for OPV2V
  • Provide the log replay toolbox
  • Implement F-Cooper (done; see News)
  • Implement V2VNet (done; see News)
  • Implement DiscoNet

Contributors

OpenCOOD is supported by the UCLA Mobility Lab. We also appreciate the great work of OpenPCDet, as parts of our work build on their framework.

Lab Principal Investigator: Dr. Jiaqi Ma

Project Lead: Runsheng Xu
