A minimalist environment for decision-making in autonomous driving

Overview

highway-env

A collection of environments for autonomous driving and tactical decision-making tasks


An episode of one of the environments available in highway-env.

Try it on Google Colab!

The environments

Highway

env = gym.make("highway-v0")

In this task, the ego-vehicle is driving on a multilane highway populated with other vehicles. The agent's objective is to reach a high speed while avoiding collisions with neighbouring vehicles. Driving on the right side of the road is also rewarded.


The highway-v0 environment.

A faster variant, highway-fast-v0, is also available, with degraded simulation accuracy to improve speed for large-scale training.

Merge

env = gym.make("merge-v0")

In this task, the ego-vehicle starts on a main highway but soon approaches a road junction with incoming vehicles on the access ramp. The agent's objective is now to maintain a high speed while making room for the vehicles so that they can safely merge into the traffic.


The merge-v0 environment.

Roundabout

env = gym.make("roundabout-v0")

In this task, the ego-vehicle is approaching a roundabout with flowing traffic. It follows its planned route automatically, but has to handle lane changes and longitudinal control to pass the roundabout as fast as possible while avoiding collisions.


The roundabout-v0 environment.

Parking

env = gym.make("parking-v0")

A goal-conditioned continuous control task in which the ego-vehicle must park in a given space with the appropriate heading.


The parking-v0 environment.
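
Since the task is goal-conditioned, observations are dictionaries rather than flat arrays. A minimal sketch of how to inspect them is given below; the keys follow the usual GoalEnv conventions and may differ across versions.

import gym
import highway_env

env = gym.make("parking-v0")
obs = env.reset()

# Assumed GoalEnv-style keys; check obs.keys() in your installation.
print(obs["observation"])    # ego-vehicle kinematics features
print(obs["achieved_goal"])  # current pose of the ego-vehicle
print(obs["desired_goal"])   # target parking spot pose and heading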

Intersection

env = gym.make("intersection-v0")

An intersection negotiation task with dense traffic.


The intersection-v0 environment.

Racetrack

env = gym.make("racetrack-v0")

A continuous control task involving lane-keeping and obstacle avoidance.


The racetrack-v0 environment.
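
Since this environment uses continuous actions by default, a quick way to try it is to sample random actions from the action space. A minimal sketch, assuming the standard gym API:

import gym
import highway_env

env = gym.make("racetrack-v0")
obs = env.reset()
done = False
while not done:
    # Random continuous steering commands, just to exercise the environment
    obs, reward, done, info = env.step(env.action_space.sample())
    env.render()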

Examples of agents

Agents solving the highway-env environments are available in the eleurent/rl-agents and DLR-RM/stable-baselines3 repositories.

See the documentation for some examples and notebooks.

Deep Q-Network


The DQN agent solving highway-v0.

This model-free value-based reinforcement learning agent performs Q-learning with function approximation, using a neural network to represent the state-action value function Q.
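
As an illustration, a DQN agent can be trained with stable-baselines3. This is only a hedged sketch: the hyperparameters below are illustrative and not necessarily those used to produce the video above.

import gym
import highway_env
from stable_baselines3 import DQN

env = gym.make("highway-fast-v0")
model = DQN("MlpPolicy", env,
            policy_kwargs=dict(net_arch=[256, 256]),
            learning_rate=5e-4,
            buffer_size=15_000,
            learning_starts=200,
            batch_size=32,
            gamma=0.8,
            train_freq=1,
            gradient_steps=1,
            target_update_interval=50,
            verbose=1)
model.learn(total_timesteps=20_000)
model.save("highway_dqn_model")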

Deep Deterministic Policy Gradient


The DDPG agent solving parking-v0.

This model-free policy-based reinforcement learning agent is optimized directly by gradient ascent. It uses Hindsight Experience Replay to efficiently learn how to solve a goal-conditioned task.
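
A hedged sketch of such a goal-conditioned setup with stable-baselines3 and its HerReplayBuffer is shown below; depending on your stable-baselines3 version, extra arguments (e.g. max_episode_length) may be required.

import gym
import highway_env
from stable_baselines3 import DDPG, HerReplayBuffer

env = gym.make("parking-v0")
model = DDPG("MultiInputPolicy", env,
             replay_buffer_class=HerReplayBuffer,
             replay_buffer_kwargs=dict(
                 n_sampled_goal=4,                  # relabel 4 goals per transition
                 goal_selection_strategy="future",  # sample goals reached later in the episode
             ),
             verbose=1)
model.learn(total_timesteps=50_000)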

Value Iteration


The Value Iteration agent solving highway-v0.

Value Iteration is only compatible with finite discrete MDPs, so the environment is first approximated by a finite-mdp environment using env.to_finite_mdp(). This simplified state representation describes the nearby traffic in terms of the predicted Time-To-Collision (TTC) on each lane of the road. The transition model is simplistic and assumes that each vehicle will keep driving at a constant speed without changing lanes. This model bias can be a source of mistakes.

The agent then performs Value Iteration to compute the corresponding optimal state-value function.
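
For reference, the core update is V(s) <- max_a [ R(s, a) + gamma * sum_s' T(s, a, s') V(s') ]. A generic sketch on dense arrays is given below; the array names are assumptions and do not reflect the actual finite-mdp API.

import numpy as np

def value_iteration(transition, reward, gamma=0.9, iterations=100):
    """transition: (S, A, S) array of probabilities, reward: (S, A) array."""
    n_states, n_actions, _ = transition.shape
    value = np.zeros(n_states)
    for _ in range(iterations):
        # Q(s, a) = R(s, a) + gamma * sum_s' T(s, a, s') V(s')
        q = reward + gamma * transition @ value
        value = q.max(axis=1)
    return value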

Monte-Carlo Tree Search

This agent leverages transition and reward models to perform a stochastic tree search (Coulom, 2006) for the optimal trajectory. No particular assumption is required on the state representation or transition model.


The MCTS agent solving highway-v0.

Installation

pip install highway-env

Usage

import gym
import highway_env

env = gym.make("highway-v0")
obs = env.reset()

done = False
while not done:
    action = ... # Your agent code here
    obs, reward, done, info = env.step(action)
    env.render()
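
Environments can also be customized through their configuration dictionary. A hedged sketch (key names may vary across versions, and depending on your gym version you may need to go through env.unwrapped):

env = gym.make("highway-v0")
env.configure({
    "lanes_count": 3,
    "vehicles_count": 30,
    "duration": 60,  # [s]
})
obs = env.reset()  # the new configuration takes effect on reset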

Documentation

Read the documentation online.

Citing

If you use the project in your work, please consider citing it with:

@misc{highway-env,
  author = {Leurent, Edouard},
  title = {An Environment for Autonomous Driving Decision-Making},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/eleurent/highway-env}},
}

List of publications & preprints using highway-env (please open a pull request to add missing entries):

PhD theses

Master theses

Comments
  • Scaling to multiple agents

    Thanks for creating this easy-to-use environment for urban scenarios. I wanted to use this environment for multi-agent learning. Currently, only single-agent learning is supported. Are there any plans for scaling it up to multiple agents?

    enhancement 
    opened by Achin17 42
  • Hello!

    Hi, I have a question: if I want to use DQN and DDQN to train the agent in highway-env, which json file should I choose? I noticed there is a file named "no_dueling.json" in /scripts/configs/HighwayEnv/agents/DQNAgent, but in model.py there is no model type matching "no_dueling.json". What should I do? Thanks for your help!

    question 
    opened by zhangxinchen123 23
  • Keep agent in lane in Continuous Action space

    I am playing around with the intersection environment with continuous actions, and I am having a hard time getting the agent to stay in its lane. It would be great to have its angle/offset from the middle of the lane as part of the observation space, and learn the behaviour from there, not purely from the reward function. I am using the Kinematics observation, in the multi-agent setting.

    Sorry if it was in the documentation already and I did not notice.

    opened by TibiGG 20
  • Questions about parallelization, increasing timescale velocity, custom observations.

    Hi @eleurent,

    I'm here again with some questions. As I wrote you two or three weeks ago, I'm building a custom parking environment. I had worked on this project with Unity (ML-Agents) before starting with gym/highway-env, due to some aspects that made it unsuitable for the continuation of my work. In ML-Agents you could speed up the training phase by parallelizing the environments and accelerating the simulation time, defining the timescale as you wanted. I was wondering whether that is possible here as well; I didn't find much about this topic.

    The last question is about the observations: I basically want to port the reward function (which is not the problem) and the observations, since I have achieved some good results. I want to use multiple observations, for example lidar or grayscale, and add some extra features to observe from the environment. Have you tried that yet?

    Thanks in advance. Regards, Andrea

    opened by Andrealberti 20
  • Training with DQN

    Hello, thank you for sharing this great work. I am trying to replicate the behaviour shown in the examples (Deep Q-Network). Did you train with the network provided in rl-agents? I have tried it with 1000 episodes and when I test it, the agent only moves to the right. Maybe more episodes are needed.

    Thank you in advance.

    opened by rodrigogutierrezm 17
  • Errors while setting up

    Hi Eleurent,

    Thanks for the amazing repository.

    When I was trying to build the project, I ran into the errors below; could you please help me fix them?

    1. Install with Python 3: when I try pip3 install --user git+https://github.com/eleurent/highway-env I get the error below and the installation is not successful.
    Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-sir6pe48/matplotlib/
    
    2. When I try pip install --user git+https://github.com/eleurent/highway-env (which installs for Python 2.7), the installation is successful. However, I am not able to import highway_env:
    Python 2.7.12 (default, Nov 12 2018, 14:36:49) 
    [GCC 5.4.0 20160609] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import highway_env
    pygame 1.9.4
    Hello from the pygame community. https://www.pygame.org/contribute.html
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/kishor/.local/lib/python2.7/site-packages/highway_env/__init__.py", line 2, in <module>
        import highway_env.envs
      File "/home/kishor/.local/lib/python2.7/site-packages/highway_env/envs/__init__.py", line 1, in <module>
        from highway_env.envs.highway_env import *
    ImportError: No module named envs.highway_env
    >>> 
    

    Thank you.

    opened by kk2491 15
  • Some questions about changing policies and observations

    Hi, I tried to run and make some changes to the "highway-v0" environment (e.g. no overtaking on the right, a safety distance and more...). I now have a question about training. At the moment the model structure is as follows:

    model = DQN('MlpPolicy', env, gamma=0.8, learning_rate=5e-4, buffer_size=50000, exploration_fraction=0.1,
                exploration_final_eps=0.5, exploration_initial_eps=1.0, batch_size=32, double_q=True,
                target_network_update_freq=50, prioritized_replay=True, verbose=1, tensorboard_log="./dqn_two_lane_tensorboard/")
    

    and the observation type is Kinematics. Results after training sessions of 300,000 steps are fluctuating, and adding layers to the MLP (64, 64, 64, 32, 20) does not seem to improve on the standard MLP. So I tried the Grayscale observation with a CnnPolicy, to see if there would be a performance improvement. Here is the code:

    model = DQN('CnnPolicy', env, gamma=0.8, learning_rate=5e-4, buffer_size=50000, exploration_fraction=0.1,
                exploration_final_eps=0.5, exploration_initial_eps=1.0, batch_size=32, double_q=True,
                target_network_update_freq=50, prioritized_replay=True, verbose=1, tensorboard_log="./dqn_two_lane_tensorboard/")
    
    "offscreen_rendering": True,
    "observation": {
        "type": "GrayscaleObservation",
        "weights": [0.2989, 0.5870, 0.1140],  # weights for RGB conversion
        "stack_size": 4,
        "observation_shape": (screen_width, screen_height)
    },
    "screen_width": screen_width,
    "screen_height": screen_height,
    "scaling": 1.75,
    "policy_frequency": 2,
    

    The training starts with no errors, but after some steps (around 4000) it crashes after exhausting all of the RAM. I tried reducing the batch size (down to 16) and the screen width and height (down to 84x84, which is really small) but it doesn't change anything.

    My PC specs are the following: GPU model: NVIDIA Quadro RTX 4000 CUDA version: 10.1 RAM: 32 GB

    My question is whether there is something I'm missing that causes the RAM saturation and, above all, whether using a CNN + grayscale observation would actually bring a performance improvement or whether it's a waste of time. Thanks in advance for your help.

    opened by lucalazzaroni 13
  • Preventing vehicle from going off-road?

    I think having a signal indicating that the vehicle has gone off the road would be useful: one could use it to terminate the episode and shorten the training time for continuous-action environments.

    Of course, an alternative is to use a time-step limit, but even with such a limit, time steps spent on the road are much more informative than those spent off-road.

    opened by mhtb32 13
  • The install of the project

    Hello! When I run the install command in the terminal, it tells me the installation was successful, but I can't find the project on my computer. What should I do next? Thanks for your reply!

    opened by zhangxinchen123 12
  • Training highway-v0 with DQN, the trained result is not as good as the one presented in the example video

    1. Example code:

    import gym
    import highway_env
    from stable_baselines import DQN

    model = DQN('MlpPolicy', "highway-fast-v0",
                policy_kwargs=dict(net_arch=[256, 256]),
                learning_rate=5e-4, buffer_size=15000, learning_starts=200,
                batch_size=32, gamma=0.8, train_freq=1, gradient_steps=1,
                target_update_interval=50, verbose=1,
                tensorboard_log="highway_dqn/")
    model.learn(int(2e4))
    model.save("highway_dqn/model")

    Questions:

    1. highway-fast-v0 does not seem to exist.
    2. The result is not as good as in the example.
    3. Parameters such as gradient_steps and target_update_interval do not exist in stable_baselines.DQN.

    opened by limeng-1234 11
  • Separates collision with obstacle and collision with vehicle

    For some applications (like what I have in mind), it may be useful to separate vehicle-to-vehicle collisions from vehicle-to-obstacle collisions.

    Please let me know if you think there is something wrong with these changes.

    opened by mhtb32 11
  • How do I get the real-time location of the created vehicle in a custom environment

    Dear author, I want to produce a landmark that changes with the state of the vehicles, but I don't know how to get the current location of a vehicle when defining the environment. In the image below, I want the landmark to always appear between car 3 and car 4, but I don't know how to get the real-time locations of car 3 and car 4. Could you tell me what to do? Looking forward to your reply.

    opened by lostboy233333 0
  • Save manual control actions

    Hello! I have a question: when I set manual control to true, I would like to record the discrete actions. How can I do this? I look forward to your reply!

    opened by tingtingLiuLiu 0
  • Some angles of the arc road cannot be drawn

    Dear author, thank you very much for your wonderful work! I ran into some problems when customizing the environment and hope to get some advice from you. I found that in some cases an arc road segment could not be drawn. For example, in the picture below, the arc segment from an angle of 180 to 90 degrees could not be drawn, so my road could not be closed. I wonder if you have any good solutions? Thank you very much for taking the time to answer my questions.

    opened by lostboy233333 4
  • Training Parking_her does not work.

    Good afternoon, I was trying to train a policy for the parking env to test against safety validation methods. When I tried to run the code on Colab as-is, I got an error when creating the environment:

    AttributeError: 'ParkingEnv' object has no attribute 'np_random'

    This error could be solved by reinstalling highway-env or by initially installing an older version of gym and highway-env. After doing this, an error occurs when creating the model, before training:

    TypeError: __init__() got an unexpected keyword argument 'create_eval_env'

    It would be much appreciated if you have any insight into how to solve this problem. My research focuses more on the verification side than on training or developing a controller to test, and I don't have much experience training controllers with RL.


    opened by JoshYank 7
  • [Feature] Allow `config` to be set in `reset(options=...)`

    Thanks for the project, it is very high quality. Would it be possible for the configuration, in particular .config, to be set through the options parameter of reset(), as this is the spirit of the API?

    I'm happy to add the PR if you are interested.

    opened by pseudo-rnd-thoughts 1
Releases(v1.7.1)
  • v1.7.1(Dec 19, 2022)

  • v1.7(Nov 6, 2022)

  • v1.6(Aug 14, 2022)

    • fix a bug in generating discrete actions from continuous actions
    • fix more bugs related to changes in gym's latest versions
    • new intersection-env variant with continuous actions
    • add longitudinal/lateral/angular offsets to the lane as part of the kinematics observation's features
    • add more configurable options for reward function and termination conditions
    • add configurable min/max speed for continuous actions
    • bug fix for reward computation in the multi-agent setting
    • add get_available_actions for MultiAgentAction
    • fix various deprecation warnings
    • add a multi-objective version of HighwayEnv

    Huge thanks to contributors @zerongxi, @TibiGG, @KexianShen, @lorandcheng

    Source code(tar.gz)
    Source code(zip)
  • v1.5(Mar 19, 2022)

    • Add documentation on continuous actions
    • Fix various bugs or imprecision in collision checks and obstacles rendering
    • Image observations are now centered on the observer vehicle
    • Fix the lane change behaviour in some situations
    • Add TupleObservation, which is a union of several observation types
    • Improve the accuracy of the LidarObservation
    • Add support for PolyLane, and methods to save/load road networks from a config
    • Fix steering wheel / angle conversion
    • Change of the velocity term projection in the reward function
    • Add support for latest gym versions (>=0.22) which dropped the Monitor wrapper
    • Add a copy of the GoalEnv interface which was removed from gym
    Source code(tar.gz)
    Source code(zip)
  • v1.4(Sep 21, 2021)

    This release introduces additional content:

    • a new continuous control environment, racetrack-v0, where the agent must learn to steer and follow the tracks, while avoiding other vehicles
    • a new "on_road" layer in the OccupancyGrid observation type, which enables the observer to see the drivable space
    • a new "align_to_vehicle_axes" option in the OccupancyGrid observation type, which renders the observation in the local vehicle frame
    • a new DiscreteAction action type, which discretizes the original ContinuousAction type. This allows low-level control with a small discrete action space (e.g. for DQN). Note that this is different from the DiscreteMetaAction type, which implements its own low-level sub-policies.
    • new example scripts and notebooks for training agents, such as a PPO continuous control policy for racetrack-v0.
    • updated documentation
    Source code(tar.gz)
    Source code(zip)
  • v1.3(Aug 30, 2021)

    This release contains:

    • A few fixes for compatibility with SB3
    • Some changes for video rendering and framerate
    • highway-fast-v0: a faster variant of highway-v0 to train/debug models more quickly
    Source code(tar.gz)
    Source code(zip)
  • v1.2(Apr 29, 2021)

  • v1.1(Mar 12, 2021)

Owner
Edouard Leurent
Research Scientist @DeepMind