A minimalist environment for decision-making in autonomous driving

Overview

highway-env

A collection of environments for autonomous driving and tactical decision-making tasks


An episode of one of the environments available in highway-env.

Try it on Google Colab!

The environments

Highway

env = gym.make("highway-v0")

In this task, the ego-vehicle is driving on a multilane highway populated with other vehicles. The agent's objective is to reach a high speed while avoiding collisions with neighbouring vehicles. Driving on the right side of the road is also rewarded.


The highway-v0 environment.

A faster variant, highway-fast-v0, is also available, with degraded simulation accuracy to improve speed for large-scale training.
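
Each environment is driven by a configuration dictionary that controls the scene and simulation parameters. The sketch below is illustrative only: the keys "lanes_count" and "vehicles_count" and their values are assumptions, so check env.config in your installed version for the actual schema.

import gym
import highway_env

env = gym.make("highway-v0")
# Override a few configuration fields; the keys below are illustrative,
# see env.config for the actual options available in your version.
env.configure({
    "lanes_count": 4,
    "vehicles_count": 50,
})
env.reset()  # the new configuration is applied on reset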

Merge

env = gym.make("merge-v0")

In this task, the ego-vehicle starts on a main highway but soon approaches a road junction with incoming vehicles on the access ramp. The agent's objective is now to maintain a high speed while making room for the vehicles so that they can safely merge into the traffic.


The merge-v0 environment.

Roundabout

env = gym.make("roundabout-v0")

In this task, the ego-vehicle is approaching a roundabout with flowing traffic. It will follow its planned route automatically, but it has to handle lane changes and longitudinal control to pass the roundabout as fast as possible while avoiding collisions.


The roundabout-v0 environment.

Parking

env = gym.make("parking-v0")

A goal-conditioned continuous control task in which the ego-vehicle must park in a given space with the appropriate heading.


The parking-v0 environment.
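
Since the task is goal-conditioned, the observation follows a GoalEnv-style dictionary layout. A small sketch for inspecting it is shown below; the exact key names are assumptions and may differ across versions.

import gym
import highway_env

env = gym.make("parking-v0")
obs = env.reset()
# Assumed GoalEnv-style keys; check your installed version.
print(obs["observation"])    # ego-vehicle state features
print(obs["achieved_goal"])  # current pose of the ego-vehicle
print(obs["desired_goal"])   # pose of the target parking space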

Intersection

env = gym.make("intersection-v0")

An intersection negotiation task with dense traffic.


The intersection-v0 environment.

Racetrack

env = gym.make("racetrack-v0")

A continuous control task involving lane-keeping and obstacle avoidance.


The racetrack-v0 environment.

Examples of agents

Agents solving the highway-env environments are available in the eleurent/rl-agents and DLR-RM/stable-baselines3 repositories.

See the documentation for some examples and notebooks.

Deep Q-Network


The DQN agent solving highway-v0.

This model-free value-based reinforcement learning agent performs Q-learning with function approximation, using a neural network to represent the state-action value function Q.
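As an illustration, a similar agent can be trained with stable-baselines3; this minimal sketch is adapted from the snippet discussed in the issues below, and the hyperparameters are for illustration only.

import gym
import highway_env
from stable_baselines3 import DQN

# Illustrative hyperparameters only; tune them for your own experiments.
model = DQN("MlpPolicy", gym.make("highway-fast-v0"),
            policy_kwargs=dict(net_arch=[256, 256]),
            learning_rate=5e-4, buffer_size=15000, learning_starts=200,
            batch_size=32, gamma=0.8, train_freq=1, gradient_steps=1,
            target_update_interval=50, verbose=1)
model.learn(total_timesteps=int(2e4))
model.save("highway_dqn/model")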

Deep Deterministic Policy Gradient


The DDPG agent solving parking-v0.

This model-free policy-based reinforcement learning agent is optimized directly by gradient ascent. It uses Hindsight Experience Replay to efficiently learn how to solve a goal-conditioned task.
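
As a sketch, this setup can also be reproduced with the off-policy algorithms in stable-baselines3 combined with its hindsight experience replay buffer; the keyword arguments below follow recent stable-baselines3 releases and should be treated as assumptions.

import gym
import highway_env
from stable_baselines3 import DDPG, HerReplayBuffer

env = gym.make("parking-v0")
# HER relabels past transitions with alternative goals ("future" strategy).
# Argument names follow recent stable-baselines3 versions; adjust if needed.
model = DDPG("MultiInputPolicy", env,
             replay_buffer_class=HerReplayBuffer,
             replay_buffer_kwargs=dict(n_sampled_goal=4,
                                       goal_selection_strategy="future"),
             verbose=1)
model.learn(total_timesteps=int(5e4))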

Value Iteration


The Value Iteration agent solving highway-v0.

Value Iteration is only compatible with finite discrete MDPs, so the environment is first approximated by a finite-mdp environment using env.to_finite_mdp(). This simplified state representation describes the nearby traffic in terms of the predicted Time-To-Collision (TTC) on each lane of the road. The transition model is simplistic and assumes that each vehicle will keep driving at a constant speed without changing lanes. This model bias can be a source of mistakes.

The agent then performs a Value Iteration to compute the corresponding optimal state-value function.
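
As a rough sketch of the backup being applied, value iteration on a deterministic finite MDP can be written with numpy. The transition and reward arrays below are assumed inputs chosen for illustration, not the actual finite-mdp API.

import numpy as np

# A minimal value-iteration loop for a deterministic finite MDP.
# transition[s, a] gives the index of the next state and reward[s, a] the
# reward; these array shapes are assumptions made for illustration only.
def value_iteration(transition, reward, gamma=0.9, iterations=100):
    value = np.zeros(reward.shape[0])
    for _ in range(iterations):
        # Bellman optimality backup: V(s) = max_a [R(s, a) + gamma * V(s')]
        value = np.max(reward + gamma * value[transition], axis=1)
    return value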

Monte-Carlo Tree Search

This agent leverages transition and reward models to perform a stochastic tree search (Coulom, 2006) of the optimal trajectory. No particular assumption is required on the state representation or the transition model.


The MCTS agent solving highway-v0.

Installation

pip install highway-env
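
The development version can also be installed directly from the GitHub repository (as mentioned in the issues below):

pip install --user git+https://github.com/eleurent/highway-env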

Usage

import gym
import highway_env

env = gym.make("highway-v0")
env.reset()  # reset the environment before stepping

done = False
while not done:
    action = ...  # Your agent code here
    obs, reward, done, info = env.step(action)
    env.render()

Documentation

Read the documentation online.

Citing

If you use the project in your work, please consider citing it with:

@misc{highway-env,
  author = {Leurent, Edouard},
  title = {An Environment for Autonomous Driving Decision-Making},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/eleurent/highway-env}},
}

List of publications & preprints using highway-env (please open a pull request to add missing entries):

PhD theses

Master theses

Comments
  • Scaling to multiple agents

    Thanks for creating this easy-to-use environment for urban scenarios. I wanted to use this environment for multi-agent learning. Currently, only single-agent learning is supported. Are there any plans for scaling it up to multiple agents?

    enhancement 
    opened by Achin17 42
  • Hello!

    Hi, I have a question: if I want to use DQN and DDQN to train the agent in highway-env, which JSON file should I choose? I notice that there is a file named "no_dueling.json" in /scripts/configs/HighwayEnv/agents/DQNAgent, but in model.py there is no model type matching "no_dueling.json". What should I do? Thanks for your help!

    question 
    opened by zhangxinchen123 23
  • Keep agent in lane in Continuous Action space

    I am playing around with the intersection environment with continuous actions, and I am having a hard time getting the agent to stay in its lane. It would be great to have its angle/offset from the middle of the lane as part of the observation space, and learn the behaviour from there, not purely from the reward function. I am using the Kinematics observation, in the multi-agent setting.

    Sorry if it was in the documentation already and I did not notice.

    opened by TibiGG 20
  • Questions about parallelization, increasing timescale velocity, custom observations.

    Hi @eleurent,

    I'm here again and want to ask you some questions. As I wrote you two or three weeks ago, I'm building a custom parking environment. I worked on this project with Unity (ML-Agents) before starting with gym/highway-env, due to some aspects that made it unsuitable for the continuation of my work. In ML-Agents you can speed up the training phase by parallelizing the environments and accelerating the simulation, defining the timescale as you want. I was wondering whether you can do that here as well; I didn't find much about this topic.

    The last question is about observations: basically I want to port the reward function (which is not the problem) and the observations, since I have achieved some good results. I want to use multiple observations, for example lidar or grayscale, and add some extra features to observe from the environment. Have you tried that yet?

    Thanks in advance. Regards, Andrea

    opened by Andrealberti 20
  • Training with DQN

    Hello, thank you for sharing this great work. I am trying to replicate the behaviour shown in the examples (Deep Q-Network). Have you trained it with the network provided in rl-agents? I have tried it with 1000 episodes, and when I test it the agent only moves to the right. Maybe more episodes are needed.

    Thank you in advance.

    opened by rodrigogutierrezm 17
  • Errors while setting up

    Hi Eleurent,

    Thanks for the amazing repository.

    When I was trying to build the project, I ran into the errors below; could you please help me fix them?

    1. Install with Python 3: when I try pip3 install --user git+https://github.com/eleurent/highway-env I get the error below and the installation is not successful.
    Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-sir6pe48/matplotlib/
    
    2. When I try pip install --user git+https://github.com/eleurent/highway-env (which does the installation with Python 2.7), the installation is successful. However, I am not able to import highway_env:
    Python 2.7.12 (default, Nov 12 2018, 14:36:49) 
    [GCC 5.4.0 20160609] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import highway_env
    pygame 1.9.4
    Hello from the pygame community. https://www.pygame.org/contribute.html
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/kishor/.local/lib/python2.7/site-packages/highway_env/__init__.py", line 2, in <module>
        import highway_env.envs
      File "/home/kishor/.local/lib/python2.7/site-packages/highway_env/envs/__init__.py", line 1, in <module>
        from highway_env.envs.highway_env import *
    ImportError: No module named envs.highway_env
    >>> 
    

    Thank you.

    opened by kk2491 15
  • Some questions about changing policies and observations

    Hi, I tried to run and make some changes to the "highway-v0" environment (i.e. no right overtake, safety distance and more...). I now have a question about training. At the moment the model structure is as follows:

    model = DQN('MlpPolicy', env, gamma=0.8, learning_rate=5e-4, buffer_size=50000, exploration_fraction=0.1,
                exploration_final_eps=0.5, exploration_initial_eps=1.0, batch_size=32, double_q=True,
                target_network_update_freq=50, prioritized_replay=True, verbose=1, tensorboard_log="./dqn_two_lane_tensorboard/")
    

    and the observation type is Kinematics. Results after training sessions of 300,000 steps are fluctuating, and adding layers to the MLP (64, 64, 64, 32, 20) does not seem to improve over the standard MLP. So I tried to use the Grayscale observation and CnnPolicy, to see if there would be a performance improvement. Here is the code:

    model = DQN('CnnPolicy', env, gamma=0.8, learning_rate=5e-4, buffer_size=50000, exploration_fraction=0.1,
                exploration_final_eps=0.5, exploration_initial_eps=1.0, batch_size=32, double_q=True,
                target_network_update_freq=50, prioritized_replay=True, verbose=1, tensorboard_log="./dqn_two_lane_tensorboard/")
    
    "offscreen_rendering": True,
    "observation": {
        "type": "GrayscaleObservation",
        "weights": [0.2989, 0.5870, 0.1140],  # weights for RGB conversion
        "stack_size": 4,
        "observation_shape": (screen_width, screen_height)
    },
    "screen_width": screen_width,
    "screen_height": screen_height,
    "scaling": 1.75,
    "policy_frequency": 2,
    

    The training starts with no errors, but after some steps (around 4000) it crashes because it exhausts all the RAM. I tried reducing the batch size (down to 16) and the screen width and height (down to 84x84, which is really small), but it doesn't change anything.

    My PC specs are the following: GPU model: NVIDIA Quadro RTX 4000 CUDA version: 10.1 RAM: 32 GB

    My question is whether there is something I'm missing that causes the RAM saturation and, mainly, whether using a CNN + Grayscale observation would actually result in a performance improvement or whether it's a waste of time. Thanks in advance for your help.

    opened by lucalazzaroni 13
  • Preventing vehicle from going off-road?

    I think having a signal indicating that the vehicle has gone off the road is useful; one can use it to terminate the episode and shorten the training time for continuous-action environments.

    Of course, an alternative is to use a time-step limit, but even with that limit in place, an episode spent on the road is much more informative than one spent off-road within the same number of steps.

    opened by mhtb32 13
  • The install of the project

    Hello! When I run the command in the terminal, it tells me the installation was successful, but I can't find the project on my computer. What should I do next? Thanks for your reply!

    opened by zhangxinchen123 12
  • Training highway-v0 with DQN, the trained result is not as good as the one presented in the example video

    1. example code

    import gym
    import highway_env
    from stable_baselines import DQN

    model = DQN('MlpPolicy', "highway-fast-v0",
                policy_kwargs=dict(net_arch=[256, 256]),
                learning_rate=5e-4, buffer_size=15000, learning_starts=200,
                batch_size=32, gamma=0.8, train_freq=1, gradient_steps=1,
                target_update_interval=50, verbose=1,
                tensorboard_log="highway_dqn/")
    model.learn(int(2e4))
    model.save("highway_dqn/model")

    Questions: 1. highway-fast-v0 does not seem to exist. 2. The result is not as good as the example. 3. These parameters, such as gradient_steps and target_update_interval, do not exist in stable_baselines.DQN.

    opened by limeng-1234 11
  • Separates collision with obstacle and collision with vehicle

    For some applications (like what I have in mind), it may be useful to separate vehicle-to-vehicle collisions from vehicle-to-obstacle collisions.

    Please let me know if you think there is something wrong with these changes.

    opened by mhtb32 11
  • How do I get the real-time location of the created vehicle in a custom environment

    Dear author, I want to produce a landmark that changes with the state of the vehicle, but I don't know how to get the current location of the vehicle when defining the environment. In the image below, I want the landmark to always appear between car 3 and car 4, but I don't know how to get their real-time locations. Could you tell me what to do? Looking forward to your reply.

    image

    opened by lostboy233333 0
  • Save manual control actions

    Hello, author! I have a question: when I set manual control to true, I would like to record the discrete actions. How can I do this? I look forward to your reply!

    opened by tingtingLiuLiu 0
  • Some angles of the arc road cannot be drawn

    Dear author, thank you very much for your wonderful work! I ran into some problems when customizing the environment and hope to get some advice from you. I found that in some cases an arc road segment could not be drawn. For example, in the following picture, the arc segment with an angle from 180 to 90 degrees could not be drawn, so my road could not be closed. I wonder if you have any good solutions? Thank you very much for taking the time to answer my questions. image_4

    opened by lostboy233333 4
  • Training Parking_her does not work.

    Good afternoon, I was trying to train a policy for the parking env to test against safety validation methods. When I tried to run the code on Colab as is, I got an error when creating the environment: AttributeError: 'ParkingEnv' object has no attribute 'np_random'. This error could be solved by reinstalling highway-env or initially installing an older version of gym and highway-env. After doing this, an error occurs when creating the model before training: TypeError: __init__() got an unexpected keyword argument 'create_eval_env'.

    It would be much appreciated if you have any insight on how to solve this problem. My research focuses more on the verification side than training or developing a controller to test. I don't have as much experience in training controllers with RL.

    Training_Code_Error_Parking

    opened by JoshYank 7
  • [Feature] Allow `config` to be set in `reset(options=...)`

    Thanks for the project, it is very high quality. Would it be possible for the configuration, in particular .config, to be set through the options parameter of reset, as this is the spirit of the API?

    I'm happy to add the PR if you are interested.

    opened by pseudo-rnd-thoughts 1
Releases (v1.7.1)
  • v1.7.1(Dec 19, 2022)

  • v1.7(Nov 6, 2022)

  • v1.6(Aug 14, 2022)

    • fix a bug in generating discrete actions from continuous actions
    • fix more bugs related to changes in gym's latest versions
    • new intersection-env variant with continuous actions
    • add longitudinal/lateral/angular offsets to the lane as part of the kinematics observation's features
    • add more configurable options for reward function and termination conditions
    • add configurable min/max speed for continuous actions
    • bug fix for reward computation in the multi-agent setting
    • add get_available_actions for MultiAgentAction
    • fix various deprecation warnings
    • add a multi-objective version of HighwayEnv

    Huge thanks to contributors @zerongxi, @TibiGG, @KexianShen, @lorandcheng

  • v1.5(Mar 19, 2022)

    • Add documentation on continuous actions
    • Fix various bugs or imprecision in collision checks and obstacles rendering
    • Image observations are now centered on the observer vehicle
    • Fix the lane change behaviour in some situations
    • Add TupleObservation, which is a union of several observation types
    • Improve the accuracy of the LidarObservation
    • Add support for PolyLane, and methods to save/load road networks from a config
    • Fix steering wheel / angle conversion
    • Change of the velocity term projection in the reward function
    • Add support for latest gym versions (>=0.22) which dropped the Monitor wrapper
    • Add a copy of the GoalEnv interface which was removed from gym
  • v1.4(Sep 21, 2021)

    This release introduces additional content:

    • a new continuous control environment, racetrack-v0, where the agent must learn to steer and follow the tracks, while avoiding other vehicles
    • a new "on_road" layer in the OccupancyGrid observation type, which enables the observer to see the drivable space
    • a new "align_to_vehicle_axes" option in the OccupancyGrid observation type, which renders the observation in the local vehicle frame
    • a new DiscreteAction action type, which discretizes the original ContinuousAction type. This allows low-level control with a small discrete action space (e.g. for DQN). Note that this is different from the DiscreteMetaAction type, which implements its own low-level sub-policies.
    • new example scripts and notebooks for training agents, such as a PPO continuous control policy for racetrack-v0.
    • updated documentation
  • v1.3(Aug 30, 2021)

    This release contains

    • A few fixes for compatibility with SB3
    • Some changes for video rendering and framerate
    • highway-fast-v0: a faster variant of highway-v0 to train/debug models more quickly
  • v1.2(Apr 29, 2021)

  • v1.1(Mar 12, 2021)

Owner
Edouard Leurent
Research Scientist @DeepMind