A minimalist environment for decision-making in autonomous driving

Overview

highway-env


A collection of environments for autonomous driving and tactical decision-making tasks


An episode of one of the environments available in highway-env.

Try it on Google Colab!

The environments

Highway

env = gym.make("highway-v0")

In this task, the ego-vehicle is driving on a multilane highway populated with other vehicles. The agent's objective is to reach a high speed while avoiding collisions with neighbouring vehicles. Driving on the right side of the road is also rewarded.


The highway-v0 environment.

A faster variant, highway-fast-v0, is also available, with degraded simulation accuracy to improve speed for large-scale training.
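
The environments are configurable. As a minimal sketch, the snippet below tweaks a few documented options through env.configure(); see the documentation for the full list of keys and their default values:

import gym
import highway_env

env = gym.make("highway-v0")
env.configure({
    "lanes_count": 4,       # number of highway lanes
    "vehicles_count": 50,   # number of other vehicles on the road
    "duration": 40,         # episode duration, in steps
})
obs = env.reset()  # the new configuration takes effect on reset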

Merge

env = gym.make("merge-v0")

In this task, the ego-vehicle starts on a main highway but soon approaches a road junction with incoming vehicles on the access ramp. The agent's objective is now to maintain a high speed while making room for the vehicles so that they can safely merge into the traffic.


The merge-v0 environment.

Roundabout

env = gym.make("roundabout-v0")

In this task, the ego-vehicle is approaching a roundabout with flowing traffic. It will follow its planned route automatically, but has to handle lane changes and longitudinal control to pass the roundabout as fast as possible while avoiding collisions.


The roundabout-v0 environment.

Parking

env = gym.make("parking-v0")

A goal-conditioned continuous control task in which the ego-vehicle must park in a given space with the appropriate heading.
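
Since the task is goal-conditioned, observations are dictionaries. A minimal sketch of inspecting them, assuming the standard observation/achieved_goal/desired_goal keys of the GoalEnv interface:

import gym
import highway_env

env = gym.make("parking-v0")
obs = env.reset()
print(obs["observation"])    # ego-vehicle features
print(obs["achieved_goal"])  # current position and heading features
print(obs["desired_goal"])   # features of the target parking space
# Rewards can be recomputed for relabelled goals, as used by Hindsight Experience Replay:
reward = env.compute_reward(obs["achieved_goal"], obs["desired_goal"], {})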


The parking-v0 environment.

Intersection

env = gym.make("intersection-v0")

An intersection negotiation task with dense traffic.


The intersection-v0 environment.

Racetrack

env = gym.make("racetrack-v0")

A continuous control task involving lane-keeping and obstacle avoidance.


The racetrack-v0 environment.

Examples of agents

Agents solving the highway-env environments are available in the eleurent/rl-agents and DLR-RM/stable-baselines3 repositories.

See the documentation for some examples and notebooks.

Deep Q-Network


The DQN agent solving highway-v0.

This model-free value-based reinforcement learning agent performs Q-learning with function approximation, using a neural network to represent the state-action value function Q.
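
As an illustration, such an agent can be trained with stable-baselines3. The sketch below uses illustrative hyperparameters, not necessarily those used to produce the video above:

import gym
import highway_env
from stable_baselines3 import DQN

env = gym.make("highway-fast-v0")  # the fast variant speeds up training
model = DQN("MlpPolicy", env,
            policy_kwargs=dict(net_arch=[256, 256]),  # Q-network architecture
            learning_rate=5e-4,
            buffer_size=15000,
            learning_starts=200,
            batch_size=32,
            gamma=0.8,
            train_freq=1,
            target_update_interval=50,
            verbose=1)
model.learn(total_timesteps=int(2e4))
model.save("highway_dqn_model")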

Deep Deterministic Policy Gradient


The DDPG agent solving parking-v0.

This model-free policy-based reinforcement learning agent is optimized directly by gradient ascent. It uses Hindsight Experience Replay to efficiently learn how to solve a goal-conditioned task.
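
With stable-baselines3, Hindsight Experience Replay can be plugged into DDPG through a replay-buffer class; a minimal sketch with illustrative hyperparameters:

import gym
import highway_env
from stable_baselines3 import DDPG, HerReplayBuffer

env = gym.make("parking-v0")
model = DDPG("MultiInputPolicy", env,  # dict observations require a multi-input policy
             replay_buffer_class=HerReplayBuffer,
             replay_buffer_kwargs=dict(
                 n_sampled_goal=4,                  # relabelled goals per real transition
                 goal_selection_strategy="future",  # sample goals from later in the episode
             ),
             verbose=1)
model.learn(total_timesteps=int(5e4))  # illustrative training budget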

Value Iteration


The Value Iteration agent solving highway-v0.

Value Iteration is only compatible with finite discrete MDPs, so the environment is first approximated by a finite-mdp environment using env.to_finite_mdp(). This simplified state representation describes the nearby traffic in terms of predicted Time-To-Collision (TTC) on each lane of the road. The transition model is simplistic and assumes that each vehicle will keep driving at a constant speed without changing lanes. This model bias can be a source of mistakes.

The agent then performs a Value Iteration to compute the corresponding optimal state-value function.
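
A minimal tabular sketch of this step is shown below. It assumes the finite MDP exposes a deterministic transition array and a reward array; the attribute names are assumptions, so check the finite-mdp package for the actual interface:

import numpy as np

def value_iteration(transition, reward, gamma=0.9, iterations=100):
    # transition: int array of shape (S, A), next state for each (state, action)
    # reward:     float array of shape (S, A)
    n_states, _ = reward.shape
    value = np.zeros(n_states)
    for _ in range(iterations):
        # Bellman optimality backup: V(s) = max_a [R(s, a) + gamma * V(s')]
        value = np.max(reward + gamma * value[transition], axis=1)
    return value

# mdp = env.to_finite_mdp()  # attribute names here are assumptions
# V = value_iteration(mdp.transition, mdp.reward)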

Monte-Carlo Tree Search

This agent leverages transition and reward models to perform a stochastic tree search (Coulom, 2006) for the optimal trajectory. No particular assumption is required on the state representation or transition model.


The MCTS agent solving highway-v0.
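
For intuition, a simplified flat Monte-Carlo search over the root actions is sketched below. This is not the rl-agents implementation, and it assumes the environment can be deep-copied to serve as a generative model:

import copy, math, random

def flat_monte_carlo_search(env, actions, budget=200, horizon=15, gamma=0.95, c=1.4):
    # UCB over root actions, with random rollouts as value estimates.
    counts = {a: 0 for a in actions}
    returns = {a: 0.0 for a in actions}
    for i in range(1, budget + 1):
        # Select a root action by the UCB rule (unvisited actions first)
        a = max(actions, key=lambda a: float("inf") if counts[a] == 0
                else returns[a] / counts[a] + c * math.sqrt(math.log(i) / counts[a]))
        sim = copy.deepcopy(env)  # assumption: the env is deep-copyable
        _, total, done, _ = sim.step(a)
        for t in range(1, horizon):
            if done:
                break
            _, r, done, _ = sim.step(random.choice(actions))  # random rollout policy
            total += gamma ** t * r
        counts[a] += 1
        returns[a] += total
    return max(actions, key=lambda a: returns[a] / max(counts[a], 1))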

Installation

pip install highway-env

Usage

import gym
import highway_env

env = gym.make("highway-v0")

obs = env.reset()
done = False
while not done:
    action = ... # Your agent code here
    obs, reward, done, info = env.step(action)
    env.render()
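
The environments also support keyboard control; a sketch assuming the documented manual_control configuration flag, under which the agent's action is ignored and the ego-vehicle is steered with the keyboard:

import gym
import highway_env

env = gym.make("highway-v0")
env.configure({"manual_control": True})
env.reset()
done = False
while not done:
    # The passed action is ignored when manual control is enabled
    obs, reward, done, info = env.step(env.action_space.sample())
    env.render()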

Documentation

Read the documentation online.

Citing

If you use the project in your work, please consider citing it with:

@misc{highway-env,
  author = {Leurent, Edouard},
  title = {An Environment for Autonomous Driving Decision-Making},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/eleurent/highway-env}},
}

List of publications & preprints using highway-env (please open a pull request to add missing entries):

PhD theses

Master theses

Comments
  • Scaling to multiple agents

    Thanks for creating this easy-to-use environment for urban scenarios. I wanted to use this environment for multi-agent learning. Currently, only single-agent learning is supported. Are there any plans for scaling it up to multiple agents?

    enhancement 
    opened by Achin17 42
  • Hello!

    Hi, I have a question: if I want to use DQN and DDQN to train the agent in highway-env, which JSON file should I choose? I noticed that there is a file named "no_dueling.json" in /scripts/configs/HighwayEnv/agents/DQNAgent, but in model.py there is no model type matching "no_dueling.json". What should I do? Thanks for your help!

    question 
    opened by zhangxinchen123 23
  • Keep agent in lane in Continuous Action space

    I am playing around with the intersection environment with continuous actions, and I am having a hard time getting the agent to stay in its lane. It would be great to have its angle/offset from the middle of the lane as part of the observation space, and learn the behaviour from there, not purely from the reward function. I am using the Kinematics observation, in the multi-agent setting.

    Sorry if it was in the documentation already and I did not notice.

    opened by TibiGG 20
  • Questions about parallelization, increasing timescale velocity, custom observations.

    Hi @eleurent,

    I'm here again and want to ask you some questions. As I wrote you two, maybe three weeks ago, I'm building a custom parking environment. I worked on this project with Unity (ML-Agents) before starting with gym/highway-env, due to some aspects that make it unsuitable for the continuation of my work. In ML-Agents you can speed up the training phase by parallelizing the environments and speeding up the simulation time, defining the timescale as you want. I was wondering whether you can do that here as well; I didn't find much about this topic.

    The last question is about observations. I basically want to port the reward function (which is not the problem) and the observations, since I have achieved some good results. I want to use multiple observations, for example lidar or greyscale, and add some features to observe from the environment. Have you tried that yet?

    Thanks in advance. Regards, Andrea

    opened by Andrealberti 20
  • Training with DQN

    Hello, thank you for sharing this great work. I am trying to replicate the behaviour shown in the examples (Deep Q-Network). Have you trained it with the network provided in rl-agents? I tried it with 1000 episodes, and when I test it, the agent only moves to the right. Maybe more episodes are needed.

    Thank you in advance.

    opened by rodrigogutierrezm 17
  • Errors while setting up

    Hi Eleurent,

    Thanks for the amazing repository.

    When I was trying to build the project, I ran into the errors below; could you please help me fix them?

    1. Install with Python 3: when I try pip3 install --user git+https://github.com/eleurent/highway-env, I get the error below and the installation is not successful.
    Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-sir6pe48/matplotlib/
    
    2. When I try pip install --user git+https://github.com/eleurent/highway-env (which installs under Python 2.7), the installation is successful. However, I am not able to import highway_env:
    Python 2.7.12 (default, Nov 12 2018, 14:36:49) 
    [GCC 5.4.0 20160609] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import highway_env
    pygame 1.9.4
    Hello from the pygame community. https://www.pygame.org/contribute.html
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/kishor/.local/lib/python2.7/site-packages/highway_env/__init__.py", line 2, in <module>
        import highway_env.envs
      File "/home/kishor/.local/lib/python2.7/site-packages/highway_env/envs/__init__.py", line 1, in <module>
        from highway_env.envs.highway_env import *
    ImportError: No module named envs.highway_env
    >>> 
    

    Thank you.

    opened by kk2491 15
  • Some questions about changing policies and observations

    Hi, I tried to run and make some changes to the "highway-v0" environment (e.g. no overtaking on the right, safety distance and more...). I now have a question about training. At the moment the model structure is as follows:

    model = DQN('MlpPolicy', env, gamma=0.8, learning_rate=5e-4, buffer_size=50000, exploration_fraction=0.1,
                exploration_final_eps=0.5, exploration_initial_eps=1.0, batch_size=32, double_q=True,
                target_network_update_freq=50, prioritized_replay=True, verbose=1, tensorboard_log="./dqn_two_lane_tensorboard/")
    

    and the observation type is Kinematics. Results after training sessions of 300,000 steps are fluctuating; adding layers to the MLP (64, 64, 64, 32, 20) seems not to improve on the standard MLP. So I tried using the Grayscale observation and CnnPolicy, to see if there would be a performance improvement. Here is the code:

    model = DQN('CnnPolicy', env, gamma=0.8, learning_rate=5e-4, buffer_size=50000, exploration_fraction=0.1,
                exploration_final_eps=0.5, exploration_initial_eps=1.0, batch_size=32, double_q=True,
                target_network_update_freq=50, prioritized_replay=True, verbose=1, tensorboard_log="./dqn_two_lane_tensorboard/")
    
    "offscreen_rendering": True,
    "observation": {
        "type": "GrayscaleObservation",
        "weights": [0.2989, 0.5870, 0.1140],  # weights for RGB conversion
        "stack_size": 4,
        "observation_shape": (screen_width, screen_height)
    },
    "screen_width": screen_width,
    "screen_height": screen_height,
    "scaling": 1.75,
    "policy_frequency": 2,
    

    The training starts with no errors, but after some steps (around 4000) it crashes because all the RAM is used. I tried reducing the batch size (down to 16) and the screen width and height (down to 84x84, which is really small), but it doesn't change anything.

    My PC specs are the following: GPU model: NVIDIA Quadro RTX 4000 CUDA version: 10.1 RAM: 32 GB

    My question is whether there is something I'm missing that causes the RAM saturation and, mostly, whether using a CNN + Grayscale observation would actually result in a performance improvement or whether it's a waste of time. Thanks in advance for your help.

    opened by lucalazzaroni 13
  • Preventing vehicle from going off-road?

    I think having a signal indicating that the vehicle has gone off the road would be useful; one could use it to terminate the episode and shorten the training time for continuous-action environments.

    Of course, an alternative is to use a time-step limit, but even with that limit, an episode spent on the road is much more informative than one spent off-road in the same number of time steps.

    opened by mhtb32 13
  • The install of the project

    Hello! When I run the command in the terminal, it tells me the installation was successful, but I can't find the project on my computer. What should I do next? Thanks for your reply!

    opened by zhangxinchen123 12
  • Training highway-v0 with DQN, the trained result is not as good as the one presented in the example video

    1. Example code:

    import gym
    import highway_env
    from stable_baselines import DQN

    model = DQN('MlpPolicy', "highway-fast-v0",
                policy_kwargs=dict(net_arch=[256, 256]),
                learning_rate=5e-4, buffer_size=15000,
                learning_starts=200, batch_size=32, gamma=0.8,
                train_freq=1, gradient_steps=1,
                target_update_interval=50, verbose=1,
                tensorboard_log="highway_dqn/")
    model.learn(int(2e4))
    model.save("highway_dqn/model")

    Questions:
    1. highway-fast-v0 does not seem to exist.
    2. The result is not as good as in the example.
    3. Parameters such as gradient_steps and target_update_interval do not exist in stable_baselines.DQN.

    opened by limeng-1234 11
  • Separates collision with obstacle and collision with vehicle

    For some applications (like what I have in mind), it may be useful to separate vehicle-to-vehicle collisions from vehicle-to-obstacle collisions.

    Please let me know if you think there is something wrong with these changes.

    opened by mhtb32 11
  • How do I get the real-time location of the created vehicle in a custom environment

    Dear author, I want to produce a landmark that changes with the state of the vehicle, but I don't know how to get the current location of a vehicle when defining the environment. In the image below, I want the landmark to always appear between car 3 and car 4, but I don't know how to get the real-time locations of car 3 and car 4. Could you tell me what to do? Looking forward to your reply.

    opened by lostboy233333 0
  • Save manual control actions

    Hello, author! I have a question: when I set manual control to true, I would like to record the discrete actions. How can I do this? I look forward to your reply!

    opened by tingtingLiuLiu 0
  • Some angles of the arc road cannot be drawn

    Dear author, thank you very much for your wonderful work! I ran into some problems when customizing the environment and hope to get some advice from you. I found that in some cases an arc road could not be drawn. For example, in the picture below, the arc segment spanning from 180 to 90 degrees could not be drawn, so my road could not be closed. I wonder if you have any good solutions? Thank you very much for taking the time to answer my questions.

    opened by lostboy233333 4
  • Training Parking_her does not work.

    Good afternoon, I was trying to train a policy for the parking env to test against safety validation methods. When I tried to run the code on Colab as-is, I got an error when creating the environment: AttributeError: 'ParkingEnv' object has no attribute 'np_random'. This error could be solved by reinstalling highway-env, or by initially installing older versions of gym and highway-env. After doing this, an error occurs when creating the model before training: TypeError: __init__() got an unexpected keyword argument 'create_eval_env'.

    It would be much appreciated if you have any insight on how to solve this problem. My research focuses more on the verification side than training or developing a controller to test. I don't have as much experience in training controllers with RL.

    opened by JoshYank 7
  • [Feature] Allow `config` to be set in `reset(options=...)`

    Thanks for the project, it is very high quality. Would it be possible for the configuration, in particular .config, to be set via the options parameter of reset, as this is in the spirit of the API?

    I'm happy to add the PR if you are interested.

    opened by pseudo-rnd-thoughts 1
Releases (v1.7.1)
  • v1.7.1(Dec 19, 2022)

  • v1.7(Nov 6, 2022)

  • v1.6(Aug 14, 2022)

    • fix a bug in generating discrete actions from continuous actions
    • fix more bugs related to changes in gym's latest versions
    • new intersection-env variant with continuous actions
    • add longitudinal/lateral/angular offsets to the lane as part of the kinematics observation's features
    • add more configurable options for reward function and termination conditions
    • add configurable min/max speed for continuous actions
    • bug fix for reward computation in the multi-agent setting
    • add get_available_actions for MultiAgentAction
    • fix various deprecation warnings
    • add a multi-objective version of HighwayEnv

    Huge thanks to contributors @zerongxi, @TibiGG, @KexianShen, @lorandcheng

  • v1.5(Mar 19, 2022)

    • Add documentation on continuous actions
    • Fix various bugs or imprecision in collision checks and obstacles rendering
    • Image observations are now centered on the observer vehicle
    • Fix the lane change behaviour in some situations
    • Add TupleObservation, which is a union of several observation types
    • Improve the accuracy of the LidarObservation
    • Add support for PolyLane, and methods to save/load road networks from a config
    • Fix steering wheel / angle conversion
    • Change of the velocity term projection in the reward function
    • Add support for latest gym versions (>=0.22) which dropped the Monitor wrapper
    • Add a copy of the GoalEnv interface which was removed from gym
  • v1.4(Sep 21, 2021)

    This release introduces additional content:

    • a new continuous control environment, racetrack-v0, where the agent must learn to steer and follow the tracks, while avoiding other vehicles
    • a new "on_road" layer in the OccupancyGrid observation type, which enables the observer to see the drivable space
    • a new "align_to_vehicle_axes" option in the OccupancyGrid observation type, which renders the observation in the local vehicle frame
    • a new DiscreteAction action type, which discretizes the original ContinuousAction type. This allows low-level control with a small discrete action space (e.g. for DQN). Note that this is different from the DiscreteMetaAction type, which implements its own low-level sub-policies.
    • new example scripts and notebooks for training agents, such as a PPO continuous control policy for racetrack-v0.
    • updated documentation
  • v1.3(Aug 30, 2021)

    This release contains:

    • A few fixes for compatibility with SB3
    • Some changes for video rendering and framerate
    • highway-fast-v0: a faster variant of highway-v0 to train/debug models more quickly
  • v1.2(Apr 29, 2021)

  • v1.1(Mar 12, 2021)

Owner
Edouard Leurent
Research Scientist @DeepMind