Deep Reinforcement Learning based Trading Agent for Bitcoin

Overview

Deep Trading Agent

Deep Reinforcement Learning based Trading Agent for Bitcoin using DeepSense Network for Q function approximation.

For complete details of the dataset, preprocessing, network architecture and implementation, refer to the Wiki of this repository.

Requirements

  • Python 2.7
  • TensorFlow
  • Pandas (for preprocessing the Bitcoin price series)
  • tqdm (for displaying training progress)

To set up an Ubuntu virtual machine with all the dependencies needed to run the code, refer to assets/vm.

Run with Docker

Pull the prebuilt Docker image directly from Docker Hub and run it as

docker pull samre12/deep-trading-agent:latest
docker run -p 6006:6006 -it samre12/deep-trading-agent:latest

OR

Build the Docker image locally by executing the command below and then run the image as

docker build -t deep-trading-agent .
docker run -p 6006:6006 -it deep-trading-agent

This will set up the repository for training the agent and

  • mount the current directory into /deep-trading-agent in the container

  • during the image build, pull the latest transaction history from the exchange and sample it to create a per-minute dataset of Bitcoin prices, placed at /deep-trading-agent/data/btc.csv

  • to initiate training of the agent, specify suitable parameters in a config file (an example config file is provided at /deep-trading-agent/code/config/config.cfg) and run the code using /deep-trading-agent/code/main.py

  • training supports logging and monitoring through TensorBoard (a minimal logging sketch follows this list)

  • vim and screen are installed in the container to edit the configuration files and run TensorBoard

  • port 6006 of the container is bound to port 6006 of the host machine so that training can be monitored with TensorBoard
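
Since training is monitored through TensorBoard, the following is a minimal TF 1.x logging sketch (an illustration, not the repository's exact summary code; the log directory /deep-trading-agent/logs is a hypothetical choice):

import tensorflow as tf

# Scalar summary for a training metric; the repository's actual metrics may differ.
loss = tf.placeholder(tf.float32, shape=(), name='loss')
tf.summary.scalar('training_loss', loss)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    # Point TensorBoard at this directory (hypothetical path) on port 6006.
    writer = tf.summary.FileWriter('/deep-trading-agent/logs', sess.graph)
    summary = sess.run(merged, feed_dict={loss: 0.42})
    writer.add_summary(summary, global_step=0)
    writer.close()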

Support

Please give a star to this repository to support the project 😄 .

ToDo

Docker Support

  • Add Docker support for a fast and easy start with the project

Improve Model performance

  • Extract the highest and lowest prices and the volume of Bitcoin traded within a given time interval in the Preprocessor
  • Use the closing, highest, and lowest prices and the traded volume as input channels to the model (removing features calculated from closing prices alone)
  • Normalize the price tensors using the price of the previous time step
  • For the complete state representation, input the remaining number of trades to the model
  • Use separate diff price blocks to calculate the unrealized PnL
  • Use an exponentially decayed weighted unrealized PnL as the reward function to incorporate the current state of the investment and stabilize the agent's learning (see the sketch after this list)
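
The following sketch illustrates two of the items above under stated assumptions (the function names and the decay constant are illustrative, not the repository's implementation): normalization by the previous time step's price, and an exponentially decayed weighted unrealized PnL.

import numpy as np

def normalize_by_previous(prices):
    # Relative one-step price changes: p_t / p_{t-1} - 1.
    prices = np.asarray(prices, dtype=np.float64)
    return prices[1:] / prices[:-1] - 1.0

def decayed_unrealized_pnl(entry_prices, position_sizes, current_price, decay=0.9):
    # Weight each open trade's unrealized PnL exponentially,
    # giving the most recent trade the highest weight.
    pnls = [(current_price - entry) * size
            for entry, size in zip(entry_prices, position_sizes)]
    if not pnls:
        return 0.0
    weights = np.array([decay ** i for i in range(len(pnls))])[::-1]
    return float(np.dot(weights, pnls) / weights.sum())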

Trading Model

The trading model is inspired by Deep Q-Trading, which solves a simplified trading problem for a single asset.
For each trading unit, only one of three actions is allowed: neutral (1), long (2), or short (3), and a reward is obtained depending on the agent's current position, as sketched below. A Deep Q-Learning agent is trained to maximize the total accumulated reward.
The Deep Q-Trading model is modified here by using the DeepSense architecture for Q function approximation.
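
A minimal sketch of this action/reward scheme (an assumption for illustration, not the repository's code):

NEUTRAL, LONG, SHORT = 1, 2, 3

def step_reward(action, previous_price, current_price):
    # Reward is the one-step price change, signed by the agent's position.
    price_change = current_price - previous_price
    if action == LONG:
        return price_change      # profit when the price rises
    if action == SHORT:
        return -price_change     # profit when the price falls
    return 0.0                   # neutral: no exposure, no reward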

Dataset

The per-minute Bitcoin price series is obtained by modifying the procedure described in this repository. Transactions on the Coinbase exchange are sampled to generate the Bitcoin price series.
Refer to assets/dataset to download the dataset.

Preprocessing

Basic Preprocessing
Missing values are dropped from the dataset entirely, and blocks of continuous values are accumulated using the timestamps of the prices.
All accumulated blocks with fewer timestamps than the combined history length of the state and the horizon of the agent are then filtered out, since they cannot be used to train the agent.
In the current implementation, the past 3 hours (180 minutes) of per-minute Bitcoin prices are used to generate the representation of the agent's current state; a sketch of this procedure follows the logs below.
With the existing dataset (at the time of writing), the following logs were generated while preprocessing:

INFO:root:Number of blocks of continuous prices found are 58863
INFO:root:Number of usable blocks obtained from the dataset are 887
INFO:root:Number of distinct episodes for the current configuration are 558471
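
A possible implementation of this basic preprocessing (column names, the 60-second gap threshold, and the horizon value are illustrative assumptions):

import pandas as pd

HISTORY_LENGTH = 180   # past 3 hours of per-minute prices
HORIZON = 5            # hypothetical trading horizon of the agent

def usable_blocks(df):
    # df has a 'timestamp' column (seconds) and a 'price' column.
    df = df.dropna(subset=['price']).sort_values('timestamp')
    # A gap of more than one minute starts a new continuous block.
    block_id = (df['timestamp'].diff() > 60).cumsum()
    blocks = [block for _, block in df.groupby(block_id)]
    # Keep only blocks long enough to cover the state history plus the horizon.
    return [b for b in blocks if len(b) >= HISTORY_LENGTH + HORIZON]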

Advanced Preprocessing
Process missing values and concatenate smaller blocks to increase the sizes of the continuous price blocks.
A standard technique in the literature for filling missing values in a way that does not significantly affect model performance is exponential filling with no decay.
(To be implemented)
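
One plausible reading of that filling strategy, as a sketch (an interpretation, not the repository's code): reindex the series to a full per-minute range and carry the last observed price forward across gaps.

import pandas as pd

def fill_missing_minutes(series):
    # series: pd.Series of prices indexed by a DatetimeIndex.
    full_index = pd.date_range(series.index.min(), series.index.max(), freq='T')
    return series.reindex(full_index).ffill()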

Implementation

Tensorflow "1.1.0" version is used for the implementation of the Deep Sense network.

DeepSense

The implementation is adapted from this GitHub repository, with a few simplifications in the network architecture to accommodate learning over a single time series of Bitcoin data. A rough sketch of the resulting structure is given below.
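
A single-series DeepSense-style Q network might look as follows (an illustration under stated assumptions: the window sizes, filter counts, and GRU width are invented, and exact module paths vary slightly across TF 1.x releases):

import tensorflow as tf

NUM_WINDOWS = 12   # 180 minutes split into 12 windows of 15 minutes (assumed split)
WINDOW_SIZE = 15
NUM_CHANNELS = 1   # closing price only
NUM_ACTIONS = 3    # neutral, long, short

prices = tf.placeholder(tf.float32, [None, NUM_WINDOWS, WINDOW_SIZE, NUM_CHANNELS])

# Apply the same 1-D convolutional stack to every window.
windows = tf.reshape(prices, [-1, WINDOW_SIZE, NUM_CHANNELS])
conv = tf.layers.conv1d(windows, filters=32, kernel_size=3, padding='same', activation=tf.nn.relu)
conv = tf.layers.conv1d(conv, filters=32, kernel_size=3, padding='same', activation=tf.nn.relu)
features = tf.reshape(conv, [-1, NUM_WINDOWS, WINDOW_SIZE * 32])

# Aggregate the per-window features over time with a GRU.
cell = tf.contrib.rnn.GRUCell(num_units=64)
_, state = tf.nn.dynamic_rnn(cell, features, dtype=tf.float32)

# Map the final recurrent state to Q-values, one per action.
q_values = tf.layers.dense(state, NUM_ACTIONS)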

Deep Q Trading

The implementation and preprocessing are inspired by this Medium post. The actual implementation of the Deep Q Network is adapted from DQN-tensorflow; the sketch below shows the core target computation such a network is trained against.
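
The standard Q-learning target (generic DQN, not code copied from DQN-tensorflow; the discount factor is an illustrative value):

import numpy as np

GAMMA = 0.99  # discount factor (illustrative value)

def q_learning_targets(rewards, next_q_values, terminals):
    # rewards: (batch,); next_q_values: (batch, num_actions) from the target
    # network; terminals: (batch,) booleans marking episode ends.
    max_next_q = next_q_values.max(axis=1)
    return rewards + GAMMA * max_next_q * (1.0 - terminals.astype(np.float64))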

Owner
Kartikay Garg
Major in Mathematics and Computing