A generalized framework for prototyping full-stack cooperative driving automation applications under CARLA+SUMO.

Overview

OpenCDA


OpenCDA is a SIMULATION tool integrated with a prototype cooperative driving automation (CDA; see SAE J3216) pipeline as well as regular automated driving components (e.g., perception, localization, planning, control). The tool integrates automated driving simulation (CARLA), traffic simulation (SUMO), and Co-simulation (CARLA + SUMO).

OpenCDA is all in Python. The purpose is to enable researchers to fast-prototype, simulate, and test CDA algorithms and functions. By applying our simulation tool, users can conveniently conduct both task-specific evaluation (e.g. object detection accuracy) and pipeline-level assessment (e.g. traffic safety) on their customized algorithms.

In collaboration with U.S.DOT CDA Research and the FHWA CARMA Program, OpenCDA, as an open-source project, makes a unique contribution from the perspective of initial-stage development and testing using simulation. OpenCDA is designed and built to support initial algorithmic testing for CDA Features. Through collaboration with CARMA Collaborative, this tool provides a unique capability to the CDA research community and will interface with the CARMA XiL tools being developed by U.S.DOT to support more advanced simulation testing of CDA Features.

The key features of OpenCDA are:

  • Integration: OpenCDA utilizes CARLA and SUMO both separately and in combination for realistic scene rendering, vehicle modeling, and traffic simulation.
  • Full-stack prototype CDA Platform in Simulation: OpenCDA provides a simple prototype automated driving and cooperative driving platform, all in Python, that contains perception, localization, planning, control, and V2X communication modules.
  • Modularity: OpenCDA is highly modularized, enabling users to conveniently replace any default algorithm or protocol with their own customized design (see the sketch after this list).
  • Benchmark: OpenCDA offers benchmark testing scenarios, benchmark baseline maps, state-of-the-art benchmark algorithms for ADS and Cooperative ADS functions, and benchmark evaluation metrics.
  • Connectivity and Cooperation: OpenCDA supports various levels and categories of cooperation between CAVs in simulation. This differentiates OpenCDA from other single vehicle simulation tools.
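
As an illustration of the modularity point above, the following is a minimal sketch of the general pattern of swapping a default module for a custom one. The class and function names here are hypothetical placeholders rather than OpenCDA's actual API; see the documentation for the real extension points.

    # Hypothetical sketch only: DefaultDetector, MyCustomDetector and
    # build_perception are illustrative names, not OpenCDA's real classes.

    class DefaultDetector:
        def detect(self, sensor_data):
            # the default perception algorithm would run here
            return []

    class MyCustomDetector:
        def detect(self, sensor_data):
            # plug in your own object-detection model here
            return []

    def build_perception(config):
        # the scenario yaml decides which implementation is instantiated
        if config.get("perception") == "custom":
            return MyCustomDetector()
        return DefaultDetector()

    perception = build_perception({"perception": "custom"})

Because every module sits behind a small interface like this, a replaced perception (or localization, planning, control) algorithm can still be evaluated with the same scenarios and metrics.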

Users can refer to the OpenCDA documentation for more details.

Major Components


OpenCDA consists of three major components: Cooperative Driving System, Co-Simulation Tools, and Scenario Manager.

Check the OpenCDA Introduction for more details.

Citation

If you are using our OpenCDA framework or codes for your development, please cite the following paper:

@inproceedings{xu2021opencda,
  title={OpenCDA: An Open Cooperative Driving Automation Framework Integrated with Co-Simulation},
  author={Xu, Runsheng and Guo, Yi and Han, Xu and Xia, Xin and Xiang, Hao and Ma, Jiaqi},
  booktitle={2021 IEEE Intelligent Transportation Systems Conference (ITSC)},
  year={2021}
}

The arXiv link to the paper: https://arxiv.org/abs/2107.06260

Also, under this LICENSE, OpenCDA is for non-commercial research only. Researchers can modify the source code for their own research only. Contracted work that generates corporate revenues and other general commercial use are prohibited under this LICENSE. See the LICENSE file for details and possible opportunities for commercial use.

Get Started


Users Guide

Note: We continuously improve the performance of OpenCDA. Currently, it is mainly tested on our customized maps and the CARLA Town06 map; therefore, we DO NOT guarantee the same level of robustness on other maps.
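
For orientation, a typical run (using commands that also appear in the issues and release notes below) starts a CARLA server in one terminal and launches a scenario test in another; the server path and the -v value are examples and depend on your installation and CARLA version:

    # terminal 1: start the CARLA server (path depends on your installation)
    /opt/carla-simulator/CarlaUE4.sh

    # terminal 2: run an OpenCDA scenario test (-v is required from v0.1.1 on)
    python opencda.py -t single_2lanefree_carla -v 0.9.11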

Developer Guide

Contribution Rule

We welcome your contributions.

  • Please report bugs and improvements by submitting issues.
  • Submit your contributions using pull requests. Please use this template for your pull requests.

In OpenCDA v0.1.0 Release

The current version features the following:

  • OpenCDA v0.1.0 software stack (basic ADS and cooperative ADS platform, benchmark algorithms for platooning, cooperative lane change, merge, and other freeway maneuvers)
  • CARLA only simulation
  • Co-Simulation function with CARLA + SUMO
  • Scenario manager and scenario database for CDA freeway applications

In Future Releases

Future versions are expected to include the following:

  • OpenCDA v0.2.0 and above software stack, including signalized intersection and corridor applications, cooperative perception and localization, and an enhanced scenario generation/manager and scenario database for newly added CDA applications
  • SUMO-only simulation, including a SUMO implementation of all cooperative driving applications using a behavior-based approach (consistent with the CARLA implementation)
  • Software-in-the-loop interfaces with two open-source ADS platforms, i.e., Autoware and CARMA
  • Hardware-in-the-loop interfaces and example projects with a real automated driving vehicle platform and a driving simulator

Contributors

OpenCDA is supported by the UCLA Mobility Lab.

Lab Principal Investigator:

Project Lead:

Team Members:

Comments
  • Spawn a new CAV at a certain simulation time step

    I was wondering if it is possible to generate a new single CAV on the on-ramp, particularly for the scenario "platoon_joining_2lanefree_cosim". I tried to spawn a single CAV on the on-ramp, but when it reached the merging area at about the same time as a mainline platoon (when it should have performed a cut-in merge), it did not merge into the platoon.

    Please advise if OpenCDA allows us to do this. My intent is to have the simulation run longer with more CAVs. (Spawning multiple CAVs at the simulation start is possible but is limited by the space of the link.)

    Thank you, Thod

    opened by thuns001 17
  • .py not found ERROR

    I am trying to run OpenCDA on a remote server with Ubuntu 16.04. I had a problem with open3d before; after I solved that problem, I got the following error: [error screenshot]. I'm sure I followed the steps in the official documentation, so what should I do to fix this error? Thanks! By the way, does OpenCDA support running on a remote server? CARLA: 0.9.11, Driver Version: 418.43, CUDA Version: 10.1

    opened by 6Lackiu 15
  • RuntimeError: opendrive could not be correctly parsed

    Not sure if I missed anything but I cannot get the basic example working.

    OS: Ubuntu 20.04, GPU: RTX 2080

    Carla itself is working fine.

    Command for starting carla server:

    /opt/carla-simulator/CarlaUE4.sh 
    4.24.3-0+++UE4+Release-4.24 518 0
    Disabling core dumps.
    

    command for starting opencda:

    $ python opencda.py -t single_2lanefree_carla
    OpenCDA Version: 0.1.0
    load opendrive map '2lane_freeway_simplified.xodr'.
    Traceback (most recent call last):
      File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/single_2lanefree_carla.py", line 35, in run_scenario
        cav_world=cav_world)
      File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/utils/sim_api.py", line 114, in __init__
        self.world = load_customized_world(xodr_path, self.client)
      File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/utils/customized_map_api.py", line 54, in load_customized_world
        enable_mesh_visibility=True))
    RuntimeError: opendrive could not be correctly parsed
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "opencda.py", line 56, in <module>
        main()
      File "opencda.py", line 51, in main
        scenario_runner(opt, config_yaml)
      File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/single_2lanefree_carla.py", line 75, in run_scenario
        eval_manager.evaluate()
    UnboundLocalError: local variable 'eval_manager' referenced before assignment
    
    question 
    opened by yanghao 12
  •  RuntimeError: time-out of 10000ms while waiting for the simulator

    python opencda.py -t platoon_joining_2lanefree_cosim
    OpenCDA Version: 0.1.0
    load opendrive map '2lane_freeway_simplified.xodr'.
    Traceback (most recent call last):
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/platoon_joining_2lanefree_cosim.py", line 42, in run_scenario
        sumo_file_parent_path=sumo_cfg)
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/cosim_api.py", line 64, in __init__
        cav_world)
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/sim_api.py", line 114, in __init__
        self.world = load_customized_world(xodr_path, self.client)
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/customized_map_api.py", line 54, in load_customized_world
        enable_mesh_visibility=True))
    RuntimeError: time-out of 10000ms while waiting for the simulator, make sure the simulator is ready and connected to localhost:2000

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "opencda.py", line 56, in <module>
        main()
      File "opencda.py", line 51, in main
        scenario_runner(opt, config_yaml)
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/platoon_joining_2lanefree_cosim.py", line 86, in run_scenario
        eval_manager.evaluate()
    UnboundLocalError: local variable 'eval_manager' referenced before assignment

    opened by luckynote 7
  • CARLA installation

    I encountered the following issue when installing CARLA with the command 'make launch' (make PythonAPI compiled successfully):

    8 warnings and 18 errors generated.
    5 warnings and 10 errors generated.
    make[1]: *** [Makefile:315: CarlaUE4Editor] Error 6
    make[1]: Leaving directory '/home/admin1/carla/Unreal/CarlaUE4'
    make: *** [Util/BuildTools/Linux.mk:7: launch] Error 2

    Please help me!!! Many thanks.

    opened by bigbird11 4
  • opencda.py: error: unrecognized arguments: -v 0.9.12

    Hi, when I changed my carla version this error occurred. Is there any mistake in my command?

    (opencda) [email protected]_2019:~/OpenCDA$ python opencda.py -t single_2lanefree_carla -v 0.9.12
    usage: opencda.py [-h] -t TEST_SCENARIO [--record] [--apply_ml]
    opencda.py: error: unrecognized arguments: -v 0.9.12
    
    opened by Sei2112 4
  • Can OpenCDA import the INTERACTION dataset, and simulate and analyze the vehicle behaviors in its scenarios?

    Hello, I'm very glad to have learned about OpenCDA. So far I have only read your paper and have not yet started studying how OpenCDA is actually used. I have a few questions:

    1. Can OpenCDA import the INTERACTION dataset and reproduce its scenarios in simulation, e.g., recreating the map, the vehicle trajectories, and behavior analysis? Does the import require converting the data types in the INTERACTION dataset? What about other datasets (e.g., the inD dataset and other datasets of vehicle behaviors and trajectories)?
    2. After the simulation, if I want to analyze certain behaviors or add algorithms for further research (e.g., LSTM for trajectory prediction, MPC for controlling a dynamics model), can the resulting data be saved, and can such algorithms be developed on top of OpenCDA?

    The above could rely on OpenCDA's built-in features, or I could write the algorithms myself (as long as OpenCDA provides the corresponding interfaces). If this is feasible, I will study OpenCDA further.

    Looking forward to your reply.

    opened by ShenZC25 3
  • Is CARLA 0.9.9 supported?

    Huge thanks for this great project, it looks amazing! I have a question about the supported versions of CARLA. I saw on the installation page that both CARLA 0.9.11 and 0.9.12 are supported, but due to our current projects we have to keep using version 0.9.9. Does your project also support CARLA 0.9.9? If not, could you please provide any ideas on how we could modify this great project so that it fits CARLA 0.9.9? Thanks!

    opened by luh-j 3
  • The errors about 'torch.cuda' and 'eval_manager'

    Hello,

    It is really great work! I am interested in co-simulation with SUMO. While running it, I have encountered errors. Could you please help me?

    Kind regards, [error screenshot]

    opened by aslirey 3
  • Ubuntu16.04 can NOT run Two-lane highway test

    Hi, Thanks for the great work

    I tried to run the single_2lanefree_carla scenario on Ubuntu 16.04, but it failed:


    ~/OpenCDA$ python opencda.py -t single_2lanefree_carla
    OpenCDA Version: 0.1.0
    Traceback (most recent call last):
      File "opencda.py", line 56, in <module>
        main()
      File "opencda.py", line 40, in main
        testing_scenario = importlib.import_module("opencda.scenario_testing.%s" % opt.test_scenario)
      ...
        import open3d as o3d
      File "/home/anaconda3/envs/opencda/lib/python3.7/site-packages/open3d/__init__.py", line 56, in <module>
        _CDLL(str(next((_Path(__file__).parent / 'cpu').glob('pybind*'))))
      File "/home/anaconda3/envs/opencda/lib/python3.7/ctypes/__init__.py", line 364, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by /home/anaconda3/envs/opencda/lib/python3.7/site-packages/open3d/cpu/pybind.cpython-37m-x86_64-linux-gnu.so)

    I searched on Google and found that it may be a problem with open3d, which requires glibc 2.27; Ubuntu 16.04 no longer seems to be supported, since it only ships glibc 2.23.

    https://github.com/isl-org/Open3D/issues/1898

    So do I have to upgrade my Ubuntu to 18.04?

    opened by CharlesWolff6 3
  • Travis CI: Test on the current versions of Ubuntu and Python

    Python 3.10 release candidate 1 should be released next week so perhaps it is time to start testing on current Python.

    If tests pass on both Python 3.7 and 3.9, it is almost certain they will also pass on 3.8.

    opened by cclauss 3
  • Running opencda in docker support

    This is not a real issue, just some notes for those who want to run OpenCDA in a Docker environment.

    1. Base Docker image: I already have a base Docker image (Ubuntu 18.04) with the CARLA client library (0.9.11) installed, i.e., import carla does not generate any error messages.
    2. OpenCDA installation: Get a copy of the source code and mount it into the Docker container based on the image from the previous step using the docker -v option, so you will have access to the OpenCDA source inside the container.
    3. X11 support: use the docker run options -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY

    Possible errors:

    1. In the container shell, when you try to run a scenario (for example, single_2lanefree_carla), you may get messages about missing libraries (libSM.so, libGL.so). To fix those errors, run sudo apt-get update && sudo apt-get install -y libsm6 libgl1-mesa-glx to install the dependencies.
    2. You may get errors like "X error: BadShmSeg, ..."; setting the environment variable export QT_X11_NO_MITSHM=1 in the container will fix it.

    If you see some other errors, leave a message here and I'll see if I can help.

    opened by jewes 3
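
Pulling the Docker notes from the last comment together, a rough sketch of the container launch described there could look like the following; the image name and host path are placeholders, and the apt-get line is the missing-library fix mentioned in that comment:

    # host: start a container from your CARLA-client base image (placeholder name),
    # mounting the OpenCDA source and forwarding X11 as described above
    docker run -it \
        -v /path/to/OpenCDA:/workspace/OpenCDA \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -e DISPLAY=$DISPLAY \
        -e QT_X11_NO_MITSHM=1 \
        my-carla-0.9.11-base bash

    # inside the container: install the libraries the scenarios may complain about
    sudo apt-get update && sudo apt-get install -y libsm6 libgl1-mesa-glx
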
Releases (v0.1.2)
  • v0.1.2(Mar 14, 2022)

    Map manager

    OpenCDA now adds a new component, map_manager, for each CAV. It dynamically loads the road topology, traffic-light information, and dynamic-object information around the ego vehicle and saves them into a rasterized map, which can be useful for RL planning, HD map learning, scene understanding, etc. Key elements of the rasterized map are listed below; a small sketch after the list illustrates how the color coding can be read back.

    • Drivable space, colored black
    • Lanes
      • Red lane: lanes controlled by red traffic lights
      • Green lane: lanes controlled by green traffic lights
      • Yellow lane: lanes not affected by any traffic light
    • Objects, colored white and represented as rectangles
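
    As a rough illustration (not OpenCDA's actual API), the color convention above means a rasterized frame can be queried with plain array operations; the exact RGB values and the assumption that the map is an H x W x 3 uint8 image are mine:

    import numpy as np

    # assume `raster` is an H x W x 3 uint8 RGB image produced by the map manager,
    # following the color convention listed above (exact colors are assumptions)
    raster = np.zeros((256, 256, 3), dtype=np.uint8)

    red_lane_mask   = np.all(raster == (255, 0, 0), axis=-1)      # lanes under red lights
    green_lane_mask = np.all(raster == (0, 255, 0), axis=-1)      # lanes under green lights
    object_mask     = np.all(raster == (255, 255, 255), axis=-1)  # objects (white rectangles)
    drivable_mask   = np.all(raster == (0, 0, 0), axis=-1)        # drivable space (black)

    print("object pixels:", int(object_mask.sum()))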
  • v0.1.1(Oct 9, 2021)

    Check https://opencda-documentation.readthedocs.io/en/latest/md_files/release_history.html to see more visualizations.



    Cooperative Perception

    OpenCDA now supports dumping data simultaneously from multiple CAVs so that V2V perception algorithms can be developed offline. The dumped data includes:

    • LiDAR data
    • RGB camera (4 for each CAV)
    • GPS/IMU
    • Velocity and future planned trajectory of the CAV
    • Surrounding vehicles' bounding box position, velocity

    Run the following script to collect cooperative data: python opencda.py -t cooperception_datadump_town06_carla -v 0.9.12 (or 0.9.11)

    Besides the dumped data above, users can also generate the future trajectory of each vehicle for trajectory prediction purposes. Run python root_of_opencda/scripts/generate_prediction_yaml.py to generate the predictions offline.
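
    As a sketch only, the dumped frames can then be post-processed offline with standard tools; the directory layout and file naming below are my assumptions (one metadata yaml plus one LiDAR point cloud per CAV per frame), not a documented format:

    import glob
    import yaml          # pyyaml
    import open3d as o3d

    # assumed layout: <dump_root>/<scenario>/<cav_id>/<frame>.yaml and <frame>.pcd
    for meta_path in sorted(glob.glob("dump_root/scenario/cav_0/*.yaml")):
        with open(meta_path, "r") as f:
            meta = yaml.safe_load(f)   # pose, velocity, planned trajectory, bounding boxes
        cloud = o3d.io.read_point_cloud(meta_path.replace(".yaml", ".pcd"))
        print(meta_path, len(cloud.points))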

    This new functionality has proved helpful: the recent paper OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication used this feature to collect cooperative data. Check https://mobility-lab.seas.ucla.edu/opv2v/ for more information.

    CARLA 0.9.12 Support

    OpenCDA now supports both CARLA 0.9.12 and 0.9.11. Users need to set the CARLA_VERSION variable before installing OpenCDA. When running opencda.py, the -v argument is required to indicate the CARLA version so that OpenCDA selects the correct API.
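
    For example, assuming CARLA_VERSION is exported as an environment variable (the scenario name is just one of those shipped with OpenCDA):

    export CARLA_VERSION=0.9.12        # set before installing OpenCDA
    python opencda.py -t single_2lanefree_carla -v 0.9.12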

    Weather Parameters

    To help estimate the influence of weather on cooperative driving automation, users can now define weather settings in the yaml file to control sunlight, fog, rain, wetness, and other conditions.
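
    OpenCDA passes these settings to CARLA; for orientation, the equivalent direct CARLA call looks roughly like the sketch below (the numeric values are arbitrary examples, and the yaml keys OpenCDA expects may differ):

    import carla

    client = carla.Client("localhost", 2000)
    world = client.get_world()

    # illustrative values only
    weather = carla.WeatherParameters(
        cloudiness=30.0,
        precipitation=20.0,        # rain
        wetness=10.0,
        fog_density=15.0,
        sun_altitude_angle=45.0,   # sunlight
    )
    world.set_weather(weather)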

    Bug Fixes

    Some minor bugs in the planning module are fixed.

  • v0.1.0(Jul 27, 2021)

    The initial release of OpenCDA

    • Integrated with CARLA and SUMO; supports CARLA-only mode and co-simulation mode.
    • Provides a full-stack automated driving and cooperative driving software system that contains perception, localization, planning, control, and V2X communication modules.
    • Default perception, localization, planning, and control algorithms installed
    • Default platooning and cooperative merge algorithms and protocols installed
    • V2X features supported, allowing simulation of communication lag and noise
    • 10+ testing scenarios provided
    • Customized maps provided for highway testing
    • Benchmark evaluation measurements provided
Owner
UCLA Mobility Lab
A research lab dedicated to harnessing system theories and tools, such as AI, control, robotics, and optimization for smart vehicles and transportation