Cooperative Driving Dataset: a dataset for multi-agent driving scenarios

Overview

Cooperative Driving Dataset (CODD)

The Cooperative Driving dataset is a synthetic dataset generated using CARLA that contains lidar data from multiple vehicles navigating simultaneously through a diverse set of driving scenarios. This dataset was created to enable further research in multi-agent perception (cooperative perception) including cooperative 3D object detection, cooperative object tracking, multi-agent SLAM and point cloud registration. Towards that goal, all the frames have been labelled with ground-truth sensor pose and 3D object bounding boxes.

This repository details the organisation of the dataset, including its data structure, and explains how to visualise the data. Additionally, it contains the code used to create the dataset, allowing users to generate their own custom datasets.

[Static frame and video showing frames]

Data structure

The dataset is composed of snippets, each containing a sequence of temporal frames in one driving environment. Each frame in a snippet corresponds to a temporal slice of data, containing sensor data (lidar) from all vehicles in that environment, as well as the absolute pose of the sensor and ground-truth annotations for the 3D bounding boxes of vehicles and pedestrians. Each snippet is saved as an HDF5 file containing the following arrays (HDF5 datasets):

  • pointcloud with dimensions [frames, vehicles, points_per_cloud, 4] where the last dimensions represent the X,Y,Z and intensity coordinates of the lidar points in the local sensor coordinate system.
  • lidar_pose with dimensions [frames, vehicles, 6] where the last coordinates represent the X,Y,Z,pitch,yaw,roll of the global sensor pose. These can be used to compute the transformation that maps from the local sensor coordinate system to the global coordinate system.
  • vehicle_boundingbox with dimensions [frames, vehicles, 8] where the last coordinates represent the 3D Bounding Box encoded by X,Y,Z,yaw,pitch,Width,Length,Height. Note that the X,Y,Z correspond to the centre of the 3DBB in the global coordinate system. The roll angle is ignored (roll=0).
  • pedestrian_boundingbox with dimensions [frames, pedestrians, 8] where the last coordinates represent the 3DBB encoded as before.

Where

  • frames indicates the number of frames in the snippet.
  • vehicles is the number of vehicles in the environment. Note that all vehicles have lidars that we use to collect data.
  • points_per_cloud is the maximum number of points per point cloud. When a given point cloud has fewer points than this maximum, the remaining entries are padded with zeros so that the clouds can be concatenated into a uniformly sized array (the sketch after this list shows how to strip this padding).
  • pedestrians is the number of pedestrians in the environment.
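
For convenience, a minimal sketch of loading a snippet with h5py is shown below (the file name is a placeholder; the dataset names follow the structure above):

import h5py
import numpy as np

# Open one snippet (replace the name with the path to any snippet file).
with h5py.File("m1v10p5s42.hdf5", "r") as f:
    pointcloud = f["pointcloud"][()]                  # [frames, vehicles, points_per_cloud, 4]
    lidar_pose = f["lidar_pose"][()]                  # [frames, vehicles, 6]
    vehicle_bb = f["vehicle_boundingbox"][()]         # [frames, vehicles, 8]
    pedestrian_bb = f["pedestrian_boundingbox"][()]   # [frames, pedestrians, 8]

print("frames:", pointcloud.shape[0], "vehicles:", pointcloud.shape[1])

# Strip the zero-padding of one cloud (frame 0, vehicle 0).
cloud = pointcloud[0, 0]                    # [points_per_cloud, 4] -> X, Y, Z, intensity
cloud = cloud[np.any(cloud != 0, axis=1)]   # keep only rows that are not all zeros
print("valid points:", cloud.shape[0])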

Notes:

  1. The point clouds are in the local coordinate system of each sensor; the transformation from the local to the global coordinate system can be computed using lidar_pose.
  2. Angles are always in degrees.
  3. Pose is represented using the UnrealEngine4 left-handed coordinate system. An example of reconstructing the local -> global transformation matrix is available in vis.py, where such a matrix is used to aggregate all local lidar point clouds into a global reference system (see also the sketch after these notes).
  4. The vehicle index is shared across pointcloud, lidar_pose and vehicle_boundingbox, i.e. the point cloud at index [frame,i] corresponds to the vehicle with bounding box at [frame,i].
  5. The vehicle and pedestrian indices are consistent across frames, so the track of a given vehicle/pedestrian can be determined.
  6. All point clouds of a given frame are synchronised in time - they were captured at exactly the same time instant.
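
Below is a minimal sketch of reconstructing the local -> global transformation from a lidar_pose row. It follows the rotation convention used in CARLA's own Python examples (angles in degrees, rotations applied in the UE4 left-handed frame); vis.py in this repository is the reference implementation and should be preferred if the two disagree.

import numpy as np

def lidar_pose_to_matrix(pose):
    # pose = [X, Y, Z, pitch, yaw, roll], angles in degrees (UE4 left-handed frame).
    x, y, z, pitch, yaw, roll = pose
    cp, sp = np.cos(np.radians(pitch)), np.sin(np.radians(pitch))
    cy, sy = np.cos(np.radians(yaw)), np.sin(np.radians(yaw))
    cr, sr = np.cos(np.radians(roll)), np.sin(np.radians(roll))
    return np.array([
        [cp * cy, cy * sp * sr - sy * cr, -cy * sp * cr - sy * sr, x],
        [sy * cp, sy * sp * sr + cy * cr, -sy * sp * cr + cy * sr, y],
        [sp,      -cp * sr,               cp * cr,                 z],
        [0.0,     0.0,                    0.0,                     1.0],
    ])

# Map one cloud (frame 0, vehicle 0) into the global frame, using the arrays
# loaded in the previous sketch (zero-padded rows should be stripped first).
T = lidar_pose_to_matrix(lidar_pose[0, 0])
xyz1 = np.concatenate([pointcloud[0, 0, :, :3],
                       np.ones((pointcloud.shape[2], 1))], axis=1)
global_xyz = (xyz1 @ T.T)[:, :3]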

Downloading the Dataset

Although this repository provides the tools to generate your own dataset (see Generating your own data), we have generated an official release of the dataset.

This dataset contains 108 snippets across all available CARLA maps. The snippet file names encode the properties of each snippet as m[mapNumber]v[numVehicles]p[numPedestrians]s[seed].hdf5.
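
If you need to filter snippets programmatically, the encoded properties can be recovered from the file name. A small sketch (the example name is a placeholder, and the map field is kept as a string since maps such as Town10HD are not purely numeric):

import re

name = "m3v12p4s1234.hdf5"   # placeholder snippet name
match = re.match(r"m(.+?)v(\d+)p(\d+)s(\d+)\.hdf5$", name)
map_number = match.group(1)  # e.g. "3"
num_vehicles, num_pedestrians, seed = (int(g) for g in match.groups()[1:])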

Download here.

This official dataset was generated with the following settings:

  • 5 fps
  • 125 frames (corresponding to 25s of simulation time per snippet)
  • 50k points per cloud
  • 100m lidar range
  • 30 burnt frames (frames discarded at the beginning of the simulation)
  • nvehicles sampled from a binomial distribution with mean 10 and var 5
  • npedestrians sampled from a binomial distribution with mean 5 and var 2
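
For reference, a binomial distribution with mean m = np and variance v = np(1-p) corresponds to p = 1 - v/m and n = m/p, i.e. roughly n = 20, p = 0.5 for vehicles and n ≈ 8, p = 0.6 for pedestrians (n is not an exact integer in the latter case, so the exact parameterisation used by genDataset.py may differ). One way to reproduce this sampling, as an illustrative sketch only:

import numpy as np

def binomial_params(mean, var):
    # mean = n*p and var = n*p*(1-p)  =>  p = 1 - var/mean, n = mean/p (rounded).
    p = 1.0 - var / mean
    return int(round(mean / p)), p

rng = np.random.default_rng()
num_vehicles = rng.binomial(*binomial_params(10, 5))    # n = 20, p = 0.5
num_pedestrians = rng.binomial(*binomial_params(5, 2))  # n ~ 8, p = 0.6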

Visualising the snippets

To visualise the data, please install the following dependencies:

  • Python 3.x
  • h5py
  • numpy
  • Mayavi >= 4.7.2

Then run:

python vis.py [path_to_snippet]

Note that you may want to pause the animation and adjust the view. The visualisation iterates through all the frames, presenting the fused point clouds from all vehicles transformed to the global coordinate system. It also shows the ground-truth bounding boxes for vehicles (in green) and pedestrians (in cyan).

[Video showing frames]

Generating your own data

Requirements

Before getting started, please install the following dependencies:

  • CARLA >= 0.9.10
  • Python 3.x
  • h5py
  • numpy

Note: If the CARLA Python package is not available on the Python path, you need to manually provide the path to the .egg file in fixpath.py.
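
As an illustration of the usual pattern (the actual code lives in fixpath.py and may differ; CARLA_PATH below is a placeholder for your install location):

import glob
import os
import sys

CARLA_PATH = "/opt/carla"   # placeholder: set this to your CARLA installation
try:
    # The CARLA Python API ships as an .egg under PythonAPI/carla/dist.
    egg_pattern = os.path.join(
        CARLA_PATH, "PythonAPI/carla/dist",
        "carla-*%d.%d-%s.egg" % (sys.version_info.major, sys.version_info.minor,
                                 "win-amd64" if os.name == "nt" else "linux-x86_64"))
    sys.path.append(glob.glob(egg_pattern)[0])
except IndexError:
    raise RuntimeError("CARLA .egg not found - set the path in fixpath.py")

import carla  # works once the egg is on sys.path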

Creating snippets

To generate the data, one must first start the CARLA simulator:

cd CARLA_PATH
./CarlaUE4.sh

Then one can create a snippet using

python genSnippet.py --map Town03 --fps 5 --frames 50 --burn 30 --nvehicles 10 --npedestrians 3 --range 100 -s test.hdf5

This creates a snippet test.hdf5 in Town03 with a rate of 5 frames per second, saving 50 frames (corresponding to 10s of simulation time) in a scenario with 10 vehicles (we collect lidar data from all of them) and 3 pedestrians.

The burn argument discards the first 30 frames, since the vehicles are initially stopped or moving slowly (due to inertia) and we would otherwise get many highly correlated frames with little new information.

Note that this script randomly selects a location in the map and tries to spawn all the vehicles within range meters of this location, which increases the likelihood that the vehicles share their fields of view (see one another).

The range also specifies the maximum range of the lidar sensors.

The seed argument defines the RNG seed, which makes it possible to reproduce the same scenario (spawn points, trajectories, etc.) while changing sensor characteristics across runs.

For more options, such as the number of points per cloud, the number of lidar lasers, or the lower lidar angle, see python genSnippet.py -h.

Creating a collection of snippets

Alternatively, to generate a collection of snippets one can use

python genDataset.py N

where N specifies the number of snippets to generate. This script randomly selects a map and samples the numbers of vehicles and pedestrians from specific distributions. Other options may be set individually within the script.
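
For example, to produce a collection of the same size as the official release (assuming the in-script settings have been adjusted to match those listed above):

python genDataset.py 108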

Note: Town06, Town07 and Town10HD need to be installed separately in CARLA; see here.

Citation

If you use our dataset or generate your own dataset using parts of our code, please cite

@article{arnold_fast_reg,
	title={{Fast and Robust Registration of Partially Overlapping Point Clouds}},
	author={Arnold, Eduardo and Mozaffari, Sajjad and Dianati, Mehrdad},
	year={2021}
}

License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

