Complete* list of autonomous driving related datasets

Overview

AD Datasets

Complete* and curated list of autonomous driving related datasets

Contributing

Contributions are very welcome! To add or update a dataset:

  • Update my-app/src/data.js

  • Make sure the dataset you add or edit has as many attributes as possible filled out:

    • Some attributes can only be found in associated papers
    • Some attributes can only be found in associated websites
    • Some attributes can only be found in the dataset itself
  • Send a pull request from your fork

Example Contribution

This is how the KITTI dataset is integrated into the website:

[...]
{
    id: "KITTI", //07.08. fertig
    href: "http://www.cvlibs.net/datasets/kitti/",
    size_hours: "6",
    size_storage: "180",
    frames: "",
    numberOfScenes: '50',
    samplingRate: "10",
    lengthOfScenes: "",
    sensors: "camera, lidar, gps/imu",
    sensorDetail: "2 greyscale cameras 1.4 MP, 2 color cameras 1.4 MP, 1 lidar 64 beams 360° 10Hz, 1 inertial and " +
        "GPS navigation system",
    benchmark: " stereo, optical flow, visual odometry, slam, 3d object detection, 3d object tracking",
    annotations: "3d bounding boxes",
    licensing: "Creative Commons Attribution-NonCommercial-ShareAlike 3.0",
    relatedDatasets: 'Semantic KITTI, KITTI-360',
    publishDate: new Date("2012-3").toISOString().split('T')[0],
    lastUpdate: new Date("2021-2").toISOString().split('T')[0],
    relatedPaper: "http://www.cvlibs.net/publications/Geiger2013IJRR.pdf",
    location: "Karlsruhe, Germany",
    rawData: "Yes"
},
[...]
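
To add a new dataset, an empty entry can be copied into the array in my-app/src/data.js. The following is only a sketch using the field names from the KITTI example above; all values are placeholders, and the comments indicate which metadata property (described below) each field is assumed to correspond to:

{
    id: "MyDataset",             // unique dataset name (placeholder)
    href: "https://example.com", // link to the dataset website (placeholder)
    size_hours: "",              // Size [h]
    size_storage: "",            // Size [GB]
    frames: "",                  // Frames
    numberOfScenes: "",          // N° Scenes
    samplingRate: "",            // Sampling Rate [Hz]
    lengthOfScenes: "",          // Scene Length [s]
    sensors: "",                 // Sensor Types
    sensorDetail: "",            // Sensors - Details
    benchmark: "",               // Benchmark
    annotations: "",             // Annotations
    licensing: "",               // Licensing
    relatedDatasets: "",         // Related Data Sets
    publishDate: new Date("2022-01").toISOString().split('T')[0], // Publish Date (placeholder date)
    lastUpdate: new Date("2022-01").toISOString().split('T')[0],  // Last Update (placeholder date)
    relatedPaper: "",            // Related Paper
    location: "",                // Location
    rawData: ""                  // "Yes" or "No"
},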

* Missing a dataset? Simply create a pull request ;)

Metadata

The following explains how the entries for the respective properties were determined.

Annotations

This property describes the types of annotations that are provided with the data set.

Benchmark

If benchmark challenges are explicitly listed with the data sets, they are specified here.

Frames

Frames states the number of frames in the data set. This includes training, test and validation data.

Last Update

If information has been provided on updates and their dates, they can be found in this category.

Licensing

In order to give users an impression of the licenses of the data sets, licensing information is included in the tool.

N° Scenes

N° Scenes shows the number of scenes contained in the data set, including the training, testing and validation segments. In the case of video recordings, one recording corresponds to one scene. For data sets consisting of photos, one photo corresponds to one scene.

Publish Date

The initial publication date of the data set can be found under this category. If no explicit publication date could be found, the submission date of the related paper was used instead.
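
For reference, the Publish Date and Last Update values in data.js are stored as YYYY-MM-DD strings, produced as in the KITTI example above. A minimal sketch of that pattern (an illustration only, not part of the data set schema; a non-ISO string such as "2012-3" is parsed in an implementation-defined way, usually as local time, so the ISO form "2012-03" is the safer input because it is parsed as UTC):

// Expand a year-month date to the first of the month and keep only the date part.
const publishDate = new Date("2012-03").toISOString().split('T')[0];
console.log(publishDate); // "2012-03-01"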

Related Data Sets

If data sets are related, the names of the related sets are listed here. Related data sets are, for example, those published by the same authors and building on one another.

Related Paper

This property solely consists of a link to the paper related to the data set.

Sampling Rate [Hz]

The Sampling Rate [Hz] property specifies the sampling rate in Hertz at which the sensors in the data set operate. This is only stated if all sensors work at the same rate or are synchronized; otherwise, the field remains empty.

Scene Length [s]

This property describes the length of the scenes in the data set in seconds, provided that all scenes have the same length; otherwise no information is given. For example, if a data set contains scenes between 30 and 60 seconds long, no entry can be made. This rule is in place to maintain comparability and sortability.

Sensor Types

This category contains a rough description of the sensor types used. Sensor types are, for example, lidar or radar.

Sensors - Details

The Sensors - Details category is an extension of the Sensor Types category. It describes the sensors in more detail in terms of type, number, frame rate, resolution and horizontal field of view.

Size [GB]

The category Size [GB] describes the storage size of the data set in gigabytes.

Size [h]

The Size [h] property is the equivalent of the Size [GB] described above, but provides information on the size of the data set in hours.

Location

This category lists the places where the data was recorded.

rawData

Denotes whether the data set provides raw or processed data.

Citation

If you find this code useful for your research, please cite our paper:

@article{Bogdoll_addatasets_2022_VEHITS,
    author    = {Bogdoll, Daniel and Schreyer, Felix and Z\"{o}llner, J. Marius},
    title     = {{ad-datasets: a meta-collection of data sets for autonomous driving}},
    journal   = {arXiv preprint:2202.01909},
    year      = {2022},
}
Owner
Daniel Bogdoll
PhD student at FZI and KIT with a focus on deep learning and autonomous driving.