This repository contains the code for designing risk-bounded motion plans for a car-like robot using the CARLA simulator.

Overview

Nonlinear Risk Bounded Robot Motion Planning

This code simulates the bicycle dynamics of a car steering along the road while avoiding a static car obstacle in the CARLA simulator. The ego vehicle has to account for all system and perception uncertainties to generate a risk-bounded motion plan and execute it with coherent risk assessment. Coherent risk assessment for a nonlinear robot such as the car in this simulation is made possible by a nonlinear model predictive control (NMPC) based steering law combined with an Unscented Kalman Filter (UKF) for state estimation. Finally, distributionally robust chance constraints, applied through temporal logic specifications, evaluate the risk of each trajectory before it is added to the sequence of trajectories that forms the motion plan from the start to the destination.
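
For reference, the sketch below shows the standard kinematic bicycle model that this kind of simulation is built around; the repository's exact dynamics, time step, and vehicle parameters may differ, so treat the names and values here as assumptions.

```python
import numpy as np

def bicycle_step(state, control, dt=0.1, wheelbase=2.5):
    """One Euler step of the standard kinematic bicycle model (illustrative;
    the repository's exact dynamics, time step, and wheelbase may differ).

    state   = [x, y, heading theta, speed v]
    control = [acceleration a, steering angle delta]
    """
    x, y, theta, v = state
    a, delta = control
    return np.array([
        x + dt * v * np.cos(theta),
        y + dt * v * np.sin(theta),
        theta + dt * v * np.tan(delta) / wheelbase,
        v + dt * a,
    ])
```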

Click the picture to watch the corresponding YouTube video supporting our work.

Motion Planning Using the CARLA Simulator

The code in this repository implements the algorithms and ideas from the following paper:

  1. V. Renganathan, S. Safaoui, A. Kothari, I. Shames, T. Summers, Risk Bounded Nonlinear Robot Motion Planning With Integrated Perception & Control, Submitted to the Special Issue on Risk-aware Autonomous Systems: Theory and Practice, Artificial Intelligence Journal, 2021.

Dependencies

  • Python 3.5+ (tested with 3.7.6)
  • NumPy
  • SciPy
  • Matplotlib
  • CasADi
  • namedlist
  • pickle (Python standard library)
  • CARLA Python API

Installing

You will need the following two items to run the code. Beyond that, there is no formal package installation procedure; simply download this repository and run the Python files. A minimal connection check is sketched after the list below.

  • CARLA SIMULATOR VERSION: 0.9.10
  • UNREAL ENGINE VERSION: 4.24.3
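
Once CARLA 0.9.10 is installed and its server is running, a minimal sanity check of the Python client connection might look like the following; the default host, port, and timeout used here are assumptions.

```python
import carla

# Connect to a locally running CARLA 0.9.10 server (default host/port assumed)
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)

world = client.get_world()
print('Connected to map:', world.get_map().name)
```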

Modules of an autonomy stack

There are two main modules to understand in this package:

  1. First, a high-level motion planner is run to generate a reference trajectory for the car from the start to the destination.
  2. Second, a low-level tracking controller enables the car to track the reference trajectory despite the realized noise (a rough sketch of the hand-off between the two modules is shown after this list).
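
As a rough picture of how the two modules connect, the planner's output can be treated as a sequence of reference states that the tracking controller consumes one waypoint at a time. The file name, array layout, and values below are illustrative assumptions, not the repository's exact interfaces.

```python
import pickle
import numpy as np

# 1) High-level planner: produces a reference trajectory as a sequence of
#    states [x, y, heading, speed] and saves it (file name and values assumed).
reference_trajectory = np.array([
    [0.0, 0.0, 0.00, 0.0],
    [1.0, 0.1, 0.05, 1.0],
    [2.0, 0.3, 0.10, 1.5],
])
with open('reference_trajectory.pkl', 'wb') as f:
    pickle.dump(reference_trajectory, f)

# 2) Low-level tracker: reads the reference back and tracks it one waypoint at
#    a time, computing a control input (NMPC in this repository) at each step.
with open('reference_trajectory.pkl', 'rb') as f:
    reference = pickle.load(f)

for waypoint in reference:
    pass  # compute and apply the tracking control toward `waypoint` here
```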

Procedure to run the code

  1. Run the Python script Generate_Monte_Carlo_Noises.py, which generates the noise parameters and data required for the simulation and stores them in pickle files.
  2. Run the Python script Run_Path_Planner.py.
  3. The code runs for the specified number of iterations and produces all the required data.
  4. Load the corresponding pickle file data in main.py at line #488.
  5. Run main.py with the CARLA executable already open.
  6. The simulation runs in the CARLA simulator, where the car tracks the reference trajectory, and the results are stored in pickle files (a sketch of how a control input is applied to the ego vehicle is shown after this list).
  7. To see the tracking results, run the Python file Tracked_Path_Plotter.py.
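
For steps 5 and 6, the low-level controller ultimately has to push its computed commands into the simulator. A minimal sketch of applying one control input to an already-spawned ego vehicle through the CARLA Python API is shown below; the throttle and steering values are assumptions, and main.py handles spawning, sensing, and timing itself.

```python
import carla

# Connect to the running CARLA server (default host/port assumed)
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Assume an ego vehicle has already been spawned; grab the first vehicle found
ego_vehicle = world.get_actors().filter('vehicle.*')[0]

# Apply one control input computed by the tracking controller (values assumed)
ego_vehicle.apply_control(carla.VehicleControl(throttle=0.4, steer=0.05, brake=0.0))
```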

Running Monte-Carlo Simulations

  1. Create a new folder called monte_carlo_results in the same directory as the Python file monte_carlo_car.py.
  2. Update trial_num at line #1554 in monte_carlo_car.py and run it while the CARLA executable is open (it automatically loads the noise realizations corresponding to trial_num from the pickle files).
  3. After the simulation is over, the results are automatically stored under the folder monte_carlo_results with a trial-specific name.
  4. Repeat the process by changing the trial number in step 2 and running again.
  5. Once all trials are completed, run the Python file monte_carlo_results_plotter.py to plot the Monte Carlo simulation results (a minimal aggregation sketch is shown after this list).
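
A minimal sketch of gathering the per-trial result files for plotting is shown below; the file pattern and the contents of each pickle are assumptions, and monte_carlo_results_plotter.py produces the actual figures.

```python
import glob
import os
import pickle

# Load every per-trial result stored by monte_carlo_car.py (file names assumed)
results = []
for path in sorted(glob.glob(os.path.join('monte_carlo_results', '*.pkl'))):
    with open(path, 'rb') as f:
        results.append(pickle.load(f))

print('Loaded', len(results), 'Monte Carlo trials')
```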

Variations

  • If you would like simple Gaussian chance constraints instead of distributionally robust chance constraints, set self.DRFlag = False at line 852 in the file DR_RRTStar_Planner.py (the difference in constraint tightening is sketched after this list).
  • Choose your state estimator (UKF or EKF) by commenting and uncommenting the corresponding estimator in lines 26-27 of the file State_Estimator.py.
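
The practical difference between the two options is how much each constraint is tightened for a given risk level. A minimal comparison for a single half-space constraint, with an illustrative risk level alpha, is sketched below.

```python
import numpy as np
from scipy.stats import norm

alpha = 0.05  # illustrative per-constraint risk level

# Distributionally robust tightening: valid for any distribution with the given
# mean and covariance (one-sided Chebyshev/Cantelli bound).
dr_multiplier = np.sqrt((1.0 - alpha) / alpha)

# Gaussian tightening: assumes the state uncertainty is exactly Gaussian.
gaussian_multiplier = norm.ppf(1.0 - alpha)

print(f"DR tightening:       {dr_multiplier:.3f} standard deviations")
print(f"Gaussian tightening: {gaussian_multiplier:.3f} standard deviations")
# The DR multiplier is larger, so self.DRFlag = False gives less conservative
# (but less distributionally robust) plans.
```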

Funding Acknowledgement

This work is partially supported by Defence Science and Technology Group, through agreement MyIP: ID10266 entitled Hierarchical Verification of Autonomy Architectures, the Australian Government, via grant AUSMURIB000001 associated with ONR MURI grant N00014-19-1-2571, and by the United States Air Force Office of Scientific Research under award number FA2386-19-1-4073.

Contributing Authors

  1. Venkatraman Renganathan - UT Dallas
  2. Sleiman Safaoui - UT Dallas
  3. Aadi Kothari - UT Dallas
  4. Benjamin Gravell - UT Dallas
  5. Dr. Iman Shames - Australian National University
  6. Dr. Tyler Summers - UT Dallas

Affiliation

TSummersLab - Control, Optimization & Networks Laboratory (CONLab)
