Cooperative Driving Dataset: a dataset for multi-agent driving scenarios

Overview

Cooperative Driving Dataset (CODD)


The Cooperative Driving dataset is a synthetic dataset generated using CARLA that contains lidar data from multiple vehicles navigating simultaneously through a diverse set of driving scenarios. This dataset was created to enable further research in multi-agent perception (cooperative perception), including cooperative 3D object detection, cooperative object tracking, multi-agent SLAM and point cloud registration. Towards that goal, all the frames have been labelled with ground-truth sensor poses and 3D object bounding boxes.

This repository details the organisation of the dataset, including its data structure, and explains how to visualise the data. Additionally, it contains the code used to create the dataset, allowing users to generate their own customised datasets.

[Static frame and video showing frames from a snippet]

Data structure

The dataset is composed of snippets, each containing a sequence of temporal frames in one driving environment. Each frame in a snippet corresponds to a temporal slice of data, containing sensor data (lidar) from all vehicles in that environment, as well as the absolute pose of the sensor and ground-truth annotations for the 3D bounding boxes of vehicles and pedestrians. Each snippet is saved as an HDF5 file containing the following arrays (HDF5 datasets):

  • pointcloud with dimensions [frames, vehicles, points_per_cloud, 4] where the last dimension contains the X,Y,Z and intensity values of the lidar points in the local sensor coordinate system.
  • lidar_pose with dimensions [frames, vehicles, 6] where the last coordinates represent the X,Y,Z,pitch,yaw,roll of the global sensor pose. These can be used to compute the transformation that maps from the local sensor coordinate system to the global coordinate system.
  • vehicle_boundingbox with dimensions [frames, vehicles, 8] where the last coordinates represent the 3D Bounding Box encoded by X,Y,Z,yaw,pitch,Width,Length,Height. Note that the X,Y,Z correspond to the centre of the 3DBB in the global coordinate system. The roll angle is ignored (roll=0).
  • pedestrian_boundingbox with dimensions [frames, pedestrians, 8] where the last coordinates represent the 3DBB encoded as before.

Where

  • frames indicates the number of frames in the snippet.
  • vehicles is the number of vehicles in the environment. Note that all vehicles have lidars that we use to collect data.
  • points_per_cloud is the maximum number of points per point cloud. Sometimes a given point cloud has fewer points than this maximum; in that case we pad the entries with zeros so that they can be concatenated into a uniformly sized array.
  • pedestrians is the number of pedestrians in the environment.

Notes:

  1. The point clouds are in the local coordinate system of each sensor, where the transformation from local to global coordinate system is computed using lidar_pose.
  2. Angles are always in degrees.
  3. Pose is represented using the UnrealEngine4 left-handed coordinate system. An example of reconstructing the local -> global transformation matrix is available in vis.py, where this matrix is used to aggregate all local lidar point clouds into a global reference system (a minimal sketch is also given after these notes).
  4. The vehicle index is shared across pointcloud, lidar_pose and vehicle_boundingbox, i.e. the point cloud at index [frame,i] corresponds to the vehicle with bounding box at [frame,i].
  5. The vehicle and pedestrian indices are consistent across frames, allowing one to determine the track of a given vehicle/pedestrian.
  6. All point clouds of a given frame are synchronised in time - they were captured at exactly the same time instant.
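As a quick orientation, the following minimal sketch (requiring only h5py and numpy) shows how one might load a snippet and fuse the point clouds of a single frame into the global coordinate system. The rotation construction assumes the CARLA/UE4 Transform convention (yaw about Z, pitch about Y, roll about X, left-handed); vis.py remains the authoritative reference, and the snippet file name below is only a hypothetical example.

import h5py
import numpy as np

def pose_to_matrix(pose):
    # Build the 4x4 local -> global transform from [X, Y, Z, pitch, yaw, roll].
    # Angles are in degrees; the rotation assumes the CARLA/UE4 left-handed
    # convention (see vis.py for the authoritative implementation).
    x, y, z, pitch, yaw, roll = pose
    cp, sp = np.cos(np.radians(pitch)), np.sin(np.radians(pitch))
    cy, sy = np.cos(np.radians(yaw)), np.sin(np.radians(yaw))
    cr, sr = np.cos(np.radians(roll)), np.sin(np.radians(roll))
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    T[0, :3] = [cp * cy, cy * sp * sr - sy * cr, -cy * sp * cr - sy * sr]
    T[1, :3] = [cp * sy, sy * sp * sr + cy * cr, -sy * sp * cr + cy * sr]
    T[2, :3] = [sp, -cp * sr, cp * cr]
    return T

with h5py.File('m1v10p5s42.hdf5', 'r') as f:    # hypothetical snippet name
    clouds = f['pointcloud'][0]                 # first frame: [vehicles, points_per_cloud, 4]
    poses = f['lidar_pose'][0]                  # first frame: [vehicles, 6]

fused = []
for cloud, pose in zip(clouds, poses):
    cloud = cloud[np.any(cloud[:, :3] != 0, axis=1)]            # drop zero padding
    pts = np.hstack([cloud[:, :3], np.ones((len(cloud), 1))])   # homogeneous coordinates
    fused.append((pose_to_matrix(pose) @ pts.T).T[:, :3])
fused = np.concatenate(fused)                   # all clouds in the global frame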

Downloading the Dataset

Although this repository provides the tools to generate your own dataset (see Generating your own data), we have generated an official release of the dataset.

This dataset contains 108 snippets across all available CARLA maps. The snippet file names encode the snippet properties as m[mapNumber]v[numVehicles]p[numPedestrians]s[seed].hdf5 (see the parsing sketch below).
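Should you need to organise snippets programmatically, the encoded properties can be recovered from a file name with a small regular expression. A minimal sketch, assuming purely numeric fields (the file name below is hypothetical):

import re

name = 'm3v11p4s102.hdf5'   # hypothetical snippet file name
fields = re.fullmatch(r'm(\d+)v(\d+)p(\d+)s(\d+)\.hdf5', name)
map_number, num_vehicles, num_pedestrians, seed = (int(g) for g in fields.groups())
print(map_number, num_vehicles, num_pedestrians, seed)   # 3 11 4 102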

Download here.

This official dataset was generated with the following settings:

  • 5 fps
  • 125 frames (corresponding to 25s of simulation time per snippet)
  • 50k points per cloud
  • 100m lidar range
  • 30 burnt frames (discarded frames at the beginning of the simulation)
  • nvehicles sampled from a binomial distribution with mean 10 and variance 5
  • npedestrians sampled from a binomial distribution with mean 5 and variance 2 (see the sketch after this list for how these mean/variance pairs map to binomial parameters)
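Since a binomial distribution is usually parameterised by (n, p) rather than by mean and variance, the settings above can be translated using mean = n*p and var = n*p*(1-p). The exact sampling code lives in genDataset.py; the following is only a minimal sketch of that conversion, with n rounded to an integer:

import numpy as np

def binomial_params(mean, var):
    # mean = n*p and var = n*p*(1-p)  =>  p = 1 - var/mean, n = mean/p
    p = 1.0 - var / mean
    n = int(round(mean / p))
    return n, p

rng = np.random.default_rng(42)           # hypothetical seed
n_veh, p_veh = binomial_params(10, 5)     # -> (20, 0.5)
n_ped, p_ped = binomial_params(5, 2)      # -> (8, 0.6)
nvehicles = rng.binomial(n_veh, p_veh)
npedestrians = rng.binomial(n_ped, p_ped)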

Visualising the snippets

To visualise the data, please install the following dependencies:

  • Python 3.x
  • h5py
  • numpy
  • Mayavi >= 4.7.2

Then run:

python vis.py [path_to_snippet]

Note that you may want to pause the animation and adjust the view. The visualisation iteratively goes through all the frames, presenting the fusion of the point cloud from all vehicles transformed to the global coordinate system. It also shows the ground-truth bounding boxes for vehicles (in green) and pedestrians (in cyan).

[Video showing frames from a snippet]

Generating your own data

Requirements

Before getting started, please install the following dependencies:

  • CARLA >= 0.9.10
  • Python 3.x
  • h5py
  • numpy

Note: If the CARLA Python package is not available in the Python path, you need to manually provide the path to the .egg file in fixpath.py.
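The exact edit depends on your CARLA installation, but conceptually fixpath.py only needs to put the .egg on sys.path before carla is imported. A typical pattern looks like the sketch below (the path is a hypothetical example; adjust it to your install):

import glob
import sys

# Point this at your CARLA installation; the version/platform suffix varies.
egg_candidates = glob.glob('/opt/carla/PythonAPI/carla/dist/carla-*.egg')   # hypothetical path
if egg_candidates:
    sys.path.append(egg_candidates[0])

import carla   # should now resolve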

Creating snippets

To generate the data one must first start the CARLA simulator:

cd CARLA_PATH
./CarlaUE4.sh

Then one can create a snippet using

python genSnippet.py --map Town03 --fps 5 --frames 50 --burn 30 --nvehicles 10 --npedestrians 3 --range 100 -s test.hdf5

This creates a snippet test.hdf5 in Town03 with a rate of 5 frames per second, saving 50 frames (corresponding to 10s of simulation time) in a scenario with 10 vehicles (we collect lidar data from all of them) and 3 pedestrians.

The burn argument is used to discard the first 30 frames, since the vehicles are initially stopped or moving slowly (due to inertia) and would otherwise yield many highly correlated frames without new information.

Note that this script randomly selects a location in the map and tries to spawn all the vehicles within range meters of this location, which increases the likelihood that the vehicles will share their fields-of-view (see one another).

The range also specifies the maximum range of the lidar sensors.

The seed argument defines the RNG seed, which allows the same scenario (spawn points, trajectories, etc.) to be reproduced while changing any sensor characteristics across runs.

For more options, such as the number of points per cloud, the number of lidar lasers, or the lower lidar angle, see python genSnippet.py -h.

Creating a collection of snippets

Alternatively, to generate a collection of snippets one can use

python genDataset.py N

where N specifies the number of snippets to generate. This script randomly selects a map and samples the numbers of vehicles and pedestrians from specific distributions. Other options may be set up individually within the script.

Note: Town06, Town07 and Town10HD need to be installed separately in CARLA; see here.

Citation

If you use our dataset or generate your own dataset using parts of our code, please cite

@article{arnold_fast_reg,
	title={{Fast and Robust Registration of Partially Overlapping Point Clouds}},
	author={Arnold, Eduardo and Mozaffari, Sajjad and Dianati, Mehrdad},
	year={2021}
}

License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

CC BY-SA 4.0
