Scenarios, tutorials and demos for Autonomous Driving

Overview

The Autonomous Driving Cookbook (Preview)


NOTE:

This project is developed and maintained by Project Road Runner at Microsoft Garage. It is currently a work in progress. We will continue to add more tutorials and scenarios based on requests from our users and the availability of our collaborators.


Autonomous driving has transcended far beyond being a crazy moonshot idea over the last half decade or so. It has quickly become one of the biggest technologies of today, one that promises to shape our tomorrow, not unlike when cars first came into existence. Big drivers powering this change are the recent advances in software (Artificial Intelligence), hardware (GPUs, FPGAs etc.) and cloud computing, which have enabled the ingestion and processing of large amounts of data, making it possible for companies to push for Levels 4 and 5 of autonomy. Achieving those levels of autonomy, though, requires training on hundreds of millions, and sometimes hundreds of billions, of miles worth of driving data to demonstrate reliability, according to a report from RAND.

Despite the large amount of data collected every day, it is still insufficient to meet the demands of the ever-increasing complexity of the AI models required by autonomous vehicles. One way to collect such huge amounts of data is through the use of simulation. Simulation not only makes it easy to collect data from a variety of scenarios that would take days, if not months, to encounter in the real world (different weather conditions, varying daylight etc.), it also provides a safe test bed for trained models. With behavioral cloning, you can easily prepare highly efficient models in simulation and fine-tune them using a relatively small amount of real-world data. Then there are models built using techniques like Reinforcement Learning, which can only be trained in simulation. With simulators such as AirSim, working on these scenarios has become very easy.
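As a rough, hypothetical illustration of how data collection from a simulator can look, here is a minimal sketch using the AirSim Python client. It assumes the airsim package is installed and an AirSim car environment is already running; the scripted steering pattern, output file names and sampling rate are illustrative choices, not part of any cookbook tutorial.

```python
import csv
import math
import time

import airsim  # AirSim Python client; requires a running AirSim car environment

client = airsim.CarClient()
client.confirmConnection()
client.enableApiControl(True)

controls = airsim.CarControls()
controls.throttle = 0.4

# Log (camera frame, steering, speed) tuples, the raw material for behavioral cloning.
with open("driving_log.csv", "w", newline="") as log:  # illustrative output path
    writer = csv.writer(log)
    writer.writerow(["image_file", "steering", "speed"])

    for i in range(200):  # record ~200 labeled frames
        # Gentle sinusoidal steering just to generate varied samples.
        controls.steering = 0.3 * math.sin(i / 20.0)
        client.setCarControls(controls)

        # Front-facing scene camera, returned as compressed PNG bytes.
        png_bytes = client.simGetImage("0", airsim.ImageType.Scene)
        image_file = f"frame_{i:05d}.png"
        with open(image_file, "wb") as img:
            img.write(png_bytes)

        writer.writerow([image_file, controls.steering, client.getCarState().speed])
        time.sleep(0.05)  # ~20 Hz sampling
```

Changing the weather or time of day in the simulator and re-running a loop like this is what makes it cheap to cover conditions that would take months to encounter on real roads.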

We believe that the best way to make a technology grow is by making it easily available and accessible to everyone. This is best achieved by making the barrier of entry as low as possible. At Microsoft, our mission is to empower every person and organization on the planet to achieve more, and that has been our primary motivation behind preparing this cookbook. Our aim with this project is to help you get quickly acquainted with different onboarding scenarios in autonomous driving so you can take what you learn here and employ it in your everyday job with a minimal barrier to entry.

Who is this cookbook for?

Our plan is to make this cookbook a valuable resource for beginners, researchers and industry experts alike. Tutorials in the cookbook are presented as Jupyter notebooks, making it very easy for you to download the instructions and get started without much setup time. To help with this, wherever needed, tutorials come with their own datasets, helper scripts and binaries. While the tutorials leverage popular open-source tools (like Keras, TensorFlow etc.) as well as Microsoft open-source and commercial technology (like AirSim, Azure virtual machines, Batch AI, CNTK etc.), the primary focus is on the content and learning, enabling you to take what you learn here and apply it to your work using the tools of your choice.
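As a taste of the kind of model the end-to-end learning tutorials build, here is a minimal, hypothetical Keras sketch of a behavioral-cloning network that regresses a steering angle from a camera frame. The input size, layer sizes and placeholder arrays are illustrative assumptions, not the architecture used in the cookbook.

```python
import numpy as np
from tensorflow.keras import layers, models

# Small convolutional regressor: camera frame in, steering angle out.
model = models.Sequential([
    layers.Input(shape=(66, 200, 3)),              # illustrative image size
    layers.Conv2D(24, 5, strides=2, activation="relu"),
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 5, strides=2, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(50, activation="relu"),
    layers.Dense(1),                                # predicted steering angle
])
model.compile(optimizer="adam", loss="mse")

# Train on simulated data first, then fine-tune the same model on a smaller
# real-world set; both arrays below are placeholders for logged driving data.
sim_images = np.zeros((8, 66, 200, 3), dtype="float32")
sim_steering = np.zeros((8, 1), dtype="float32")
model.fit(sim_images, sim_steering, epochs=1, batch_size=4)
```

The same idea carries over directly to whichever framework you prefer, which is why the tutorials focus on the workflow rather than on any particular toolchain.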

We would love to hear your feedback on how we can evolve this project to reach that goal. Please use the GitHub Issues section to get in touch with us regarding ideas and suggestions.

Tutorials available

Currently, the following tutorials are available:

The following tutorials will be available soon:

  • Lane Detection using Deep Learning

Contributing

Please read the instructions and guidelines for collaborators if you wish to add a new tutorial to the cookbook.

This project welcomes and encourages contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
