Deep Reinforcement Learning based autonomous navigation for quadcopters using the PPO algorithm.


PPO-based Autonomous Navigation for Quadcopters


This repository contains an implementation of Proximal Policy Optimization (PPO) for autonomous navigation of a quadcopter in a corridor environment. Every 4 meters, the corridor is blocked by a wall with a circular opening that the drone must fly through. The agent is expected to pass through these openings without colliding with the blocks. This project currently runs only on Windows, since the Unreal environments were packaged for Windows.

🛠️ Libraries & Tools

OpenAI Gym, Microsoft AirSim, Stable-Baselines3, PyTorch (see requirements.txt for the full list).

Overview

The training environment has 9 sections with different textures and hole positions. The agent starts in one of these sections at random, and its starting point is also randomized within a specific region of the yz-plane.

Observation Space

  • The state is an RGB image taken by the agent's front-facing camera.
  • Image shape: 50 x 50 x 3

Action Space

  • There are 9 discrete actions.
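
For reference, both spaces can be written with Gym's space types as in the sketch below. This is illustrative only; the actual definitions live in the repository's environment code.

# Illustrative sketch only; the real space definitions are in the repo's environment code.
import numpy as np
from gym import spaces

# Observation: 50 x 50 RGB image from the agent's front camera
observation_space = spaces.Box(low=0, high=255, shape=(50, 50, 3), dtype=np.uint8)

# Action: one of 9 discrete actions defined by the environment
action_space = spaces.Discrete(9)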

Environment setup to run the code

#️⃣ 1. Clone the repository

git clone https://github.com/bilalkabas/PPO-based-Autonomous-Navigation-for-Quadcopters

#️⃣ 2. From Anaconda command prompt, create a new conda environment

I recommend using Anaconda or Miniconda to create a virtual environment.

conda create -n ppo_drone python==3.8

#️⃣ 3. Install required libraries

Inside the main directory of the repo, run:

conda activate ppo_drone
pip install -r requirements.txt

#️⃣ 4. (Optional) Install PyTorch for GPU

You must have a CUDA-supported NVIDIA GPU.


For this project, I used CUDA 11.0 and the following conda command to install PyTorch:

conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch
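
To verify that PyTorch can see the GPU, you can run:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

It should print True if the CUDA build is active.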

#️⃣ 5. Edit settings.json

The settings.json file is located in the Documents\AirSim folder. Its content should be as below:

{
    "SettingsVersion": 1.2,
    "LocalHostIp": "127.0.0.1",
    "SimMode": "Multirotor",
    "ClockSpeed": 20,
    "ViewMode": "SpringArmChase",
    "Vehicles": {
        "drone0": {
            "VehicleType": "SimpleFlight",
            "X": 0.0,
            "Y": 0.0,
            "Z": 0.0,
            "Yaw": 0.0
        }
    },
    "CameraDefaults": {
        "CaptureSettings": [
            {
                "ImageType": 0,
                "Width": 50,
                "Height": 50,
                "FOV_Degrees": 120
            }
        ]
    }
}

How to run the training?

Make sure you followed the instructions above to set up the environment.

#️⃣ 1. Download the training environment

Go to the releases and download TrainEnv.zip. After the download completes, extract it.

#️⃣ 2. Now, you can open up the environment's executable file and start the training

Then, inside the repository, run:

python main.py
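
For reference, the training roughly follows the standard Stable-Baselines3 PPO pattern. The sketch below is illustrative only and is not the actual main.py; the environment id, save path, and hyperparameters are assumptions.

# Illustrative sketch only -- not the actual main.py.
# The environment id, save path, and hyperparameters are assumptions.
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv

# Wrap the custom AirSim environment with episode monitoring and vectorization
env = DummyVecEnv([lambda: Monitor(gym.make("airsim-env-v0"))])  # id is an assumption

# CnnPolicy, since observations are 50x50x3 camera images
model = PPO("CnnPolicy", env, verbose=1)
model.learn(total_timesteps=280_000)   # the provided policy was trained for 280k steps
model.save("saved_policy/ppo_policy")  # file name is an assumption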

How to run the pretrained model?

Make sure you followed the instructions above to set up the environment. To speed up training, the simulation runs at 20x speed. You may consider changing the "ClockSpeed" parameter in settings.json to 1 for real-time playback.
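
For real-time playback, the relevant line in settings.json becomes:

"ClockSpeed": 1,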

#️⃣ 1. Download the test environment

Go to the releases and download TestEnv.zip. After the download completes, extract it.

#️⃣ 2. Now, you can open up the environment's executable file and run the trained model

Then, inside the repository, run:

python policy_run.py
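
Roughly, policy_run.py loads the saved policy and steps the test environment with it. The sketch below is illustrative only; the environment id and the policy file name are assumptions.

# Illustrative sketch only -- not the actual policy_run.py.
# The environment id and policy file name are assumptions.
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv

env = DummyVecEnv([lambda: Monitor(gym.make("test-env-v0"))])  # id is an assumption
model = PPO.load("saved_policy/ppo_policy", env=env)           # path is an assumption

obs = env.reset()
for _ in range(10_000):  # VecEnv auto-resets finished episodes
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)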

Training results

The trained model in the saved_policy folder was trained for 280k steps.


Test results

The test environment has different textures and hole positions than the training environment. Over 100 episodes, the trained model travels 17.5 m on average and passes through 4 holes on average without any collision. The agent can pass through at most 9 holes in the test environment without any collision.

Author

Bilal Kabas

License

This project is licensed under the GNU Affero General Public License.


Comments
  • A warning I met while running "python policy_run.py"

    I have followed each step as suggested by the README. However, I encounter the following problem:

    WARNING:tornado.general:Connect error on fd 336: WSAECONNREFUSED
    WARNING:tornado.general:Connect error on fd 336: WSAECONNREFUSED
    WARNING:tornado.general:Connect error on fd 336: WSAECONNREFUSED
    WARNING:tornado.general:Connect error on fd 336: WSAECONNREFUSED
    WARNING:tornado.general:Connect error on fd 336: WSAECONNREFUSED
    Traceback (most recent call last):
      File "policy_run.py", line 14, in <module>
        env = DummyVecEnv([lambda: Monitor(
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\stable_baselines3\common\vec_env\dummy_vec_env.py", line 25, in __init__
        self.envs = [fn() for fn in env_fns]
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\stable_baselines3\common\vec_env\dummy_vec_env.py", line 25, in <listcomp>
        self.envs = [fn() for fn in env_fns]
      File "policy_run.py", line 15, in <lambda>
        gym.make(
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\gym\envs\registration.py", line 235, in make
        return registry.make(id, **kwargs)
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\gym\envs\registration.py", line 129, in make
        env = spec.make(**kwargs)
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\gym\envs\registration.py", line 90, in make
        env = cls(**_kwargs)
      File "E:\Project\PPO_based_ANfQ\PPO-based-Autonomous-Navigation-for-Quadcopters\scripts\airsim_env.py", line 169, in __init__
        super(TestEnv, self).__init__(ip_address, image_shape, env_config)
      File "E:\Project\PPO_based_ANfQ\PPO-based-Autonomous-Navigation-for-Quadcopters\scripts\airsim_env.py", line 19, in __init__
        self.setup_flight()
      File "E:\Project\PPO_based_ANfQ\PPO-based-Autonomous-Navigation-for-Quadcopters\scripts\airsim_env.py", line 174, in setup_flight
        super(TestEnv, self).setup_flight()
      File "E:\Project\PPO_based_ANfQ\PPO-based-Autonomous-Navigation-for-Quadcopters\scripts\airsim_env.py", line 36, in setup_flight
        self.drone.reset()
      File "E:\Project\PPO_based_ANfQ\PPO-based-Autonomous-Navigation-for-Quadcopters\scripts\airsim\client.py", line 26, in reset
        self.client.call('reset')
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\msgpackrpc\session.py", line 41, in call
        return self.send_request(method, args).get()
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\msgpackrpc\future.py", line 43, in get
        raise self._error
    msgpackrpc.error.TransportError: Retry connection over the limit

    I would be grateful if anyone could tell me how to fix this.

    opened by XiAoSSuper