Rainbow: Combining Improvements in Deep Reinforcement Learning

Overview

MIT License

Rainbow: Combining Improvements in Deep Reinforcement Learning [1].

Results and pretrained models can be found in the releases.

  • DQN [2]
  • Double DQN [3]
  • Prioritised Experience Replay [4]
  • Dueling Network Architecture [5]
  • Multi-step Returns [6]
  • Distributional RL [7]
  • Noisy Nets [8]

Run the original Rainbow with the default arguments:

python main.py

Data-efficient Rainbow [9] can be run using the following options (note that the "unbounded" memory is implemented here in practice by manually setting the memory capacity to be the same as the maximum number of timesteps):

python main.py --target-update 2000 \
               --T-max 100000 \
               --learn-start 1600 \
               --memory-capacity 100000 \
               --replay-frequency 1 \
               --multi-step 20 \
               --architecture data-efficient \
               --hidden-size 256 \
               --learning-rate 0.0001 \
               --evaluation-interval 10000

Note that pretrained models from the 1.3 release used a (slightly) incorrect network architecture. To use these, change the padding in the first convolutional layer from 0 to 1 (DeepMind uses "valid" (no) padding).
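
As an illustration only, the change amounts to something like the following, assuming the standard DQN convolutional stack whose first layer has the form Conv2d(history_length, 32, 8, stride=4); the repository's own definition in model.py is the source of truth:

import torch.nn as nn

# First convolutional layer as currently defined (DeepMind's "valid", i.e. no, padding);
# the 4 input channels stand in for the frame-history length.
conv1 = nn.Conv2d(4, 32, kernel_size=8, stride=4, padding=0)

# Change needed to load the 1.3 pretrained models:
conv1_for_v13_models = nn.Conv2d(4, 32, kernel_size=8, stride=4, padding=1)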

Requirements

To install all dependencies with Anaconda, run:

conda env create -f environment.yml

and activate the environment with:

source activate rainbow

Available Atari games can be found in the atari-py ROMs folder.
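
For example, to train on a specific game (this assumes the repository's --game argument; run python main.py --help to check the available options):

python main.py --game space_invaders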

Acknowledgements

References

[1] Rainbow: Combining Improvements in Deep Reinforcement Learning
[2] Playing Atari with Deep Reinforcement Learning
[3] Deep Reinforcement Learning with Double Q-learning
[4] Prioritized Experience Replay
[5] Dueling Network Architectures for Deep Reinforcement Learning
[6] Reinforcement Learning: An Introduction
[7] A Distributional Perspective on Reinforcement Learning
[8] Noisy Networks for Exploration
[9] When to Use Parametric Models in Reinforcement Learning?

Comments
  • Prioritised Experience Replay

    I am interested in implementing Rainbow too. I haven't gone deep into the code yet, but I noticed in the README that Prioritised Experience Replay is not checked. Will this feature be implemented, or is it perhaps already working? In their paper, DeepMind actually show that Prioritised Experience Replay is the most important component: the "no priority" ablation has the largest performance gap from the full Rainbow.

    bug help wanted 
    opened by marintoro 28
  • Replicating DeepMind results

    As of 5c252ea, this repo has been checked over several times for discrepancies, but is still unable to replicate DeepMind's results. This issue is to discuss any further points that may need fixing.

    • [x] Should the loss be averaged or summed over the minibatch?
    • [x] Should noisy network updating use independent noise per transition in the batch [v1], or the same noise for the batch with a separate noise sample for action selection [v2]?
    • [x] Is the max priority taken over all time, or only over the current buffer? The results and the paper indicate the former.
    • [x] Are priorities added as δ, or as δ + ε? A single ablation run indicates that adding ε causes performance to drop more at the end of training; δ + ε shouldn't be needed with a KL loss anyway.
    • [x] Most people implement PER by storing priorities already raised to the power α, but the maths indicates that the raw values should be stored and the exponent applied only at sampling time (see the sketch after this list). α isn't changed here, so this isn't an issue.
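
    As a minimal sketch of that last point (illustrative only, not the repository's memory implementation): store the raw priorities and apply the exponent α only when computing sampling probabilities.

    import numpy as np

    def sample_indices(raw_priorities, batch_size, alpha=0.5, rng=None):
        """Sample indices with probability proportional to p_i ** alpha.

        raw_priorities holds the unexponentiated values (e.g. |TD error| or the KL loss);
        alpha is applied here, at sampling time, rather than when priorities are stored.
        """
        rng = rng or np.random.default_rng()
        p = np.asarray(raw_priorities, dtype=np.float64) ** alpha
        probs = p / p.sum()
        return rng.choice(len(p), size=batch_size, p=probs), probs

    # Example: raw priorities are stored as-is; only sampling sees alpha.
    indices, probs = sample_indices([0.1, 0.5, 2.0, 0.05], batch_size=2)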

    Space Invaders (averaged losses): [plot attached]

    Space Invaders (summed losses): [plot attached]

    help wanted 
    opened by Kaixhin 24
  • Resume support

    Added preliminary support for resuming. Initial testing looks like it works, but I'd appreciate if anyone else gets a chance to play with it in their setup.

    I didn't add an explicit resume flag, although we could do that. Currently, the assumption is that if you provide the --memory-save-path argument, you want the memory saved there, by default after every testing round. If you provide the --model argument and do not provide the --evaluate flag, the assumption is that you want to resume, and that --memory-save-path exists.

    Another flag we could add is a --T_start flag, akin to --T_max, to specify where training is resuming from and improve the logging for resumed models. What do you think?

    Choosing to compress at all, and to use bz2 specifically, came after a quick benchmark with some pickled memories I had: compression drops them from ~2 GB to under 100 MB, with bz2 taking around 2-3 minutes versus roughly 40 seconds for pickling without compression.
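
    For reference, a minimal sketch of that save/load scheme (assuming the replay memory object is picklable; the function names are illustrative, not the PR's actual code):

    import bz2
    import pickle

    def save_memory(memory, path):
        # bz2 trades a few minutes of compression time for a much smaller file on disk.
        with bz2.open(path, 'wb') as f:
            pickle.dump(memory, f)

    def load_memory(path):
        with bz2.open(path, 'rb') as f:
            return pickle.load(f)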

    opened by guydav 14
  • Performance of release v1.0 on Space Invaders

    I just ran release v1.0 (commit 952fcb4) on Space Invaders for the whole weekend (around 25M steps), using the exact same code with the exact same random seed, and got much lower performance than the results you are showing. The reward and Q-value plots are attached.

    Could you explain exactly how you obtained your results for this release? Did you run multiple experiments with different random seeds and average them, or just take the best one? Or could it be a PyTorch, atari_py or other library issue? Could you list all your library versions?

    opened by marintoro 13
  • Testing should not be deterministic

    There is a parameter --evaluation-episodes, but in the current implementation, since we always act greedily, all the evaluation episodes will be exactly the same. I think that to get a better evaluation you should act non-deterministically when testing (i.e. instead of always taking the action with the highest Q-value, sample over all the actions using the Q-values as probabilities).
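
    A minimal sketch of this suggestion (illustrative only; since raw Q-values can be negative, one option is to sample from a softmax over them rather than using them directly as probabilities):

    import torch

    def select_action(q_values, greedy=False, temperature=1.0):
        """q_values: 1D tensor of Q-values for the current state."""
        if greedy:
            return q_values.argmax().item()
        # Softmax turns the Q-values into a valid probability distribution to sample from.
        probs = torch.softmax(q_values / temperature, dim=0)
        return torch.multinomial(probs, 1).item()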

    I implemented that in the last commit on my branch ([email protected]); it's really straightforward.

    By the way, I launched a training run last night and everything worked properly, but I don't have access to a powerful computer yet, so the agent's performance was still pretty poor (it was in the early stages of training). I just wanted to know whether you have already launched a big training run, on which game, and whether you compared it to a standard DRL algorithm (like plain DQN)? There may still be some non-breaking errors in the implementation that would be sneaky to spot and debug (if the agent learns worse than plain DQN, for example, something must be wrong).

    opened by marintoro 8
  • TypeError: stack(): argument 'tensors' (position 1) must be tuple of Tensors, not collections.deque

    Traceback (most recent call last):
      File "main.py", line 81, in <module>
        state, done = env.reset(), False
      File "C:\Users\simon\Desktop\DQN\RL-AlphaGO\Rainbow-master\env.py", line 53, in reset
        return torch.stack(self.state_buffer, 0)
    TypeError: stack(): argument 'tensors' (position 1) must be tuple of Tensors, not collections.deque

    Could somebody give me a hand?
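
    The error shows torch.stack rejecting the deque itself; wrapping the frame buffer in list() before stacking sidesteps that type check (a sketch, not necessarily the exact fix applied in the repository):

    from collections import deque
    import torch

    state_buffer = deque([torch.zeros(84, 84) for _ in range(4)], maxlen=4)

    # torch.stack() requires a tuple/list of tensors, so convert the deque first:
    state = torch.stack(list(state_buffer), 0)  # shape: (4, 84, 84)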

    opened by forhonourlx 7
  • disable env.reset() after every episode

    Hi, may I check: if I would like to keep the environment as it is after each training episode, should I just comment out line 147 in main.py, or should I also comment out line 130? Also, what should I do if I just want to reset the agent's position but keep the environment as it is after each training episode?

    Thank you.

    question 
    opened by zyzhang1130 5
  • TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

    ......
    self.actions.get(action): 4
    self.actions.get(action): 4
    self.actions.get(action): 4
    self.actions.get(action): 4
    self.actions.get(action): 1
    self.actions.get(action): 1
    self.actions.get(action): 1
    self.actions.get(action): 1
    self.actions.get(action): None

    Traceback (most recent call last):
      File "main.py", line 103, in <module>
        next_state, reward, done = env.step(action)  # Step
      File "C:\Users\simon\Desktop\DQN\RL-AlphaGO\Rainbow-master\env.py", line 63, in step
        reward += self.ale.act(self.actions.get(action))
      File "C:\Program Files\Python35\lib\site-packages\atari_py\ale_python_interface.py", line 159, in act
        return ale_lib.act(self.obj, int(action))
    TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
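
    The traceback suggests dict.get returned None because the action index passed in is not a key of the actions dict. A hedged sketch of a guard that surfaces the problem earlier (names follow the traceback, but this is not the repository's code):

    def to_ale_action(actions, action_index):
        """actions: dict mapping the agent's action indices to ALE action codes."""
        if action_index not in actions:
            raise KeyError('Invalid action index {}; valid indices: {}'.format(action_index, sorted(actions)))
        return actions[action_index]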

    opened by forhonourlx 5
  • Unit test Prioritised Experience Replay Memory

    PER was reported to cause issues (decreasing the performance of a DQN) when ported to another codebase. Although PER can cause performance to decrease, it is still likely that there exists a bug within it.
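
    One property such a test could check, sketched here without assuming the memory's internal API: sample repeatedly and compare the empirical frequencies against the stored priorities.

    import numpy as np

    def check_sampling_matches_priorities(sample_fn, priorities, n_draws=100_000, tol=0.02):
        """sample_fn() should return a single sampled transition index; priorities are the
        values the memory is expected to sample proportionally to (after any exponent)."""
        counts = np.zeros(len(priorities))
        for _ in range(n_draws):
            counts[sample_fn()] += 1
        expected = np.asarray(priorities, dtype=np.float64)
        expected /= expected.sum()
        assert np.allclose(counts / n_draws, expected, atol=tol)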

    bug 
    opened by Kaixhin 5
  • Policy and reward function

    Hi, there are certain things I would like to modify in the policy and the reward function. May I ask where the policy is stored after each epoch of training? Is there some way to call/index/assign it with a flag? Thanks for answering.

    opened by zyzhang1130 4
  • Memory capacity for example data-efficient Rainbow?

    Hi folks,

    I'm running the data-efficient Rainbow as a baseline for a project I'm starting, and one thing isn't making sense in my head. The original Rainbow paper uses a 1M transition buffer, and comparatively, the data-efficient paper (Appendix E) claims to use an unbounded memory.

    Do you have any sense of what an unbounded memory even means in practice? Is there any particular reason you chose to make it smaller than the default Rainbow memory buffer, rather than larger?

    Thank you!

    question 
    opened by guydav 4
  • A problem about one game in ALE cannot be trained

    Hi, Kai! I have found an issue that occurs when I set the game "defender" as the environment. It only displays the hyper-parameter settings ("args"), and no training results are output, unlike the other games.

    Thanks!

    bug 
    opened by Hugh-Cai 1
  • Stuck in memory._retrieve when batch size  > 32

    Hi,

    I notice that Rainbow doesn't work when the batch size is greater than 32 (I tried 64, 128 and 256): it gets stuck in the recursive call in memory._retrieve. Why does this happen? Is there something I can do about this (to increase the batch size), or does the batch size need to stay small?

    Thanks

    question 
    opened by jiwoongim 1
  • Is the evaluation procedure different?

    Hi Kai,

    In the Rainbow paper, the evaluation procedure is described as

    The average scores of the agent are evaluated during training, every 1M steps in the environment, by suspending learning and evaluating the latest agent for 500K frames. Episodes are truncated at 108K frames (or 30 minutes of simulated play).

    However, the code as written tests for a fixed number of episodes. Am I missing anything? Or is this the procedure from the data-efficient Rainbow paper (I couldn't find a detailed description there).
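
    For what it's worth, a minimal sketch of evaluating by frame budget as described in the quoted passage (the agent/env interfaces and the frame-versus-step accounting here are assumptions, not the repository's API):

    def evaluate(agent, env, eval_frames=500_000, max_episode_frames=108_000):
        scores, frames = [], 0
        while frames < eval_frames:
            state, done = env.reset(), False
            episode_reward, episode_frames = 0.0, 0
            while not done and episode_frames < max_episode_frames:
                action = agent.act(state)    # greedy / evaluation-mode action
                state, reward, done = env.step(action)
                episode_reward += reward
                episode_frames += 1
                frames += 1
            scores.append(episode_reward)
        return sum(scores) / len(scores)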

    Thanks!

    enhancement question 
    opened by guydav 8
  • Human-expert normalized scores

    The Rainbow DQN paper uses human-expert normalized scores, so I am not sure how to evaluate the training results against the original paper. Do you know what values were used for human expert scores?

    I found snippets of the values used in various papers, but I am not sure whether we can use the same numbers, or how to compute a single normalized value across all Atari games (image attached).
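
    For reference, the usual definition in the DQN/Rainbow line of papers normalizes each game against a random agent and the human baseline, and the papers report the median of these per-game values; the per-game random/human scores themselves come from the papers' tables. A sketch:

    import statistics

    def human_normalized_score(agent_score, random_score, human_score):
        # 0% corresponds to random play, 100% to the human baseline.
        return 100.0 * (agent_score - random_score) / (human_score - random_score)

    def median_normalized(per_game_scores):
        """per_game_scores: iterable of (agent, random, human) score triples."""
        return statistics.median(
            human_normalized_score(a, r, h) for a, r, h in per_game_scores
        )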

    opened by ThisIsIsaac 4
  • Pinned memory experience replay

    A more efficient implementation would allocate a giant tensor in advance for each item (e.g. state, action) in the transition tuple, pin it (as long as the machine has enough spare RAM; at least 6GB?), and use asynchronous copies to the GPU.
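
    A minimal sketch of the idea, assuming a CUDA device and single 84x84 uint8 frames per transition (shapes and the helper are illustrative; frame stacking is omitted):

    import torch

    capacity, batch_size = 1_000_000, 32
    device = torch.device('cuda')

    # Preallocated, page-locked host storage for one field of the transition tuple
    # (one 84x84 uint8 frame per transition is roughly 7 GB at 1M capacity).
    states = torch.zeros(capacity, 84, 84, dtype=torch.uint8, pin_memory=True)
    # Pinned staging buffer so the host-to-device copy below can be asynchronous.
    staging = torch.empty(batch_size, 84, 84, dtype=torch.uint8, pin_memory=True)

    def sample_to_gpu(indices):
        """indices: 1D LongTensor of sampled transition positions."""
        torch.index_select(states, 0, indices, out=staging)  # gather on the host
        return staging.to(device, non_blocking=True).float().div_(255)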

    enhancement 
    opened by Kaixhin 0
Releases: 1.4

Owner: Kai Arulkumaran (researcher, programmer, DJ, transhumanist)