A Deep Reinforcement Learning Framework for Stock Market Trading

Overview

DQN-Trading

This is a framework for stock market trading based on deep reinforcement learning. This project is the implementation code for the two papers listed in the References section below.

The deep reinforcement learning algorithm used here is Deep Q-Learning.

Acknowledgement

Requirements

Install PyTorch using the following command (this is for CUDA 11.1 and Python 3.8):

pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html

Other requirements:

  • python = 3.8
  • pandas = 1.3.2
  • numpy = 1.21.2
  • matplotlib = 3.4.3
  • cython = 0.29.24
  • scikit-learn = 0.24.2

TODO List

  • Right now the project does not have code for reading hyper-parameters from the terminal and running from the command line. We preferred writing a Jupyter notebook (Main.ipynb) in which you can set the input data and the model, along with the hyper-parameters.

  • The project also does not have code for hyper-parameter search (it's easy to implement).

  • You can also set the random seed used for training the models when reproducing the experiments.

Developers' Guidelines

In this section, I briefly explain the different parts of the project and how to modify each. The data for the project was downloaded from Yahoo Finance, where you can search for a specific market and download your data under the Historical Data section. Then create a directory with the name of the stock under the Data directory and put the .csv file there.

The DataLoader directory contains the files that preprocess the data and interact with the RL agent. DataLoader.py loads the data given the folder name under the Data directory and the name of the .csv file. Use the YahooFinanceDataLoader class for data downloaded from Yahoo Finance.
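As a rough illustration, loading a dataset might look like the sketch below. This is not taken verbatim from the repository: the constructor arguments (split_point, load_from_file) and the data_train/data_test attributes are assumptions, so check the YahooFinanceDataLoader class for the actual interface.

from DataLoader.DataLoader import YahooFinanceDataLoader

# Assumed arguments: the folder name under Data/, a date splitting train/test,
# and a flag to reuse the preprocessed file if it already exists.
data_loader = YahooFinanceDataLoader('AAPL',
                                     split_point='2018-01-01',
                                     load_from_file=True)

train_df = data_loader.data_train   # assumed attribute names
test_df = data_loader.data_test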

The Data.py file is the environment that interacts with the RL agent and contains all the functionality of a standard RL environment. For each agent, I developed a class inherited from the Data class that differs only in the state space (the states for the LSTM and convolutional models are time-series, while for the other models they are simple OHLC values). In DataForPatternBasedAgent.py the states are patterns extracted using rule-based methods from technical analysis. In DataAutoPatternExtractionAgent.py the states are the Open, High, Low, and Close prices (plus some other information about the candlestick, such as trend, upper shadow, lower shadow, etc.). In DataSequential.py, as the name suggests, the state space is a time-series, which is used by both the LSTM and convolutional models. DataSequencePrediction.py was an idea for feeding states predicted by an LSTM model to the RL agent; this idea is still raw and needs to be developed.
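To give a feel for how these environment classes relate to the base class, here is a hypothetical subclass of Data whose state is simply the raw OHLC values of each candle. The constructor arguments, the self.states attribute, and the column names are assumptions; compare with DataAutoPatternExtractionAgent.py for the real interface.

import pandas as pd
from DataLoader.Data import Data  # base environment class described above

class DataOHLCAgent(Data):
    """Hypothetical environment: each state is the candle's [open, high, low, close]."""

    def __init__(self, data: pd.DataFrame, *args, **kwargs):
        super().__init__(data, *args, **kwargs)
        # Build one state per candle from the OHLC columns (assumed column names).
        self.states = [
            [row['open'], row['high'], row['low'], row['close']]
            for _, row in data.iterrows()
        ]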

Wherever we use an encoder-decoder architecture, the decoder is the DQN agent, whose neural network is the same across all models.

The DeepRLAgent directory contains the following models:

  • VanillaInput: the DQN model without an encoder part, whose data loaders are DataAutoPatternExtractionAgent.py and DataForPatternBasedAgent.py.
  • SimpleCNNEncoder: an encoder-decoder model where the encoder is a 1d convolutional layer added on top of the DQN decoder.
  • MLPEncoder: an encoder-decoder model where the encoder is a simple MLP and the decoder is the DQN agent.

Under the EncoderDecoderAgent directory are all the time-series models, which use DataSequential.py as their environment (an illustrative sketch of such an encoder-decoder follows this list):

  • CNN: a two-layered 1d CNN as the encoder.
  • CNN2D: a single-layered 2d CNN as the encoder.
  • CNN-GRU: the encoder is a 1d CNN over the input followed by a GRU on the CNN output. The idea is that the CNN extracts features from each candlestick, and the GRU then extracts the temporal dependencies among those features.
  • CNNAttn: a two-layered 1d CNN with an attention layer that puts higher emphasis on specific parts of the features extracted from the time-series.
  • GRU: a GRU encoder that extracts temporal relations among candles.
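As a rough picture of how an encoder-decoder agent is put together, here is a minimal, self-contained PyTorch sketch of a GRU encoder feeding a small Q-value head. It is illustrative only and not the exact architecture used in the repository; the layer sizes, class name, and number of actions are assumptions.

import torch
import torch.nn as nn

class GRUEncoderDQN(nn.Module):
    """Illustrative encoder-decoder: a GRU summarizes the candle time-series,
    then an MLP head (the 'decoder'/DQN part) maps the summary to Q-values."""

    def __init__(self, n_features=4, hidden_size=32, n_actions=3):
        super().__init__()
        self.encoder = nn.GRU(input_size=n_features, hidden_size=hidden_size,
                              batch_first=True)
        self.q_head = nn.Sequential(
            nn.Linear(hidden_size, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),  # e.g. buy / sell / hold
        )

    def forward(self, x):
        # x: (batch, window_size, n_features) time-series of candles
        _, h_n = self.encoder(x)             # h_n: (1, batch, hidden_size)
        return self.q_head(h_n.squeeze(0))   # Q-values: (batch, n_actions)

# Example usage with a random batch of 8 windows of 20 OHLC candles:
q_values = GRUEncoderDQN()(torch.randn(8, 20, 4))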

For running each agent, please refer to Main.ipynb for instructions on how to run it and feed the data. The notebook also contains code for plotting the results.

The Objects directory contains the saved models from our experiments for each agent.

The PatternDetectionCandleStick directory contains Evaluation.py, which implements all the evaluation metrics used in the papers. This file receives the actions from the agents and evaluates the result of the strategy offered by each agent. LabelPatterns.py uses rule-based methods to generate buy or sell signals, and Extract.py detects well-known candlestick patterns.
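To illustrate what this evaluation stage does conceptually, here is a hypothetical helper (not the repository's Evaluation class) that turns a sequence of buy/sell/hold actions into a simple total-return figure:

def total_return(close_prices, actions, initial_cash=1000.0):
    """Hypothetical metric: follow 'buy'/'sell'/'none' signals with all-in trades
    and report the final portfolio value relative to the initial cash."""
    cash, shares = initial_cash, 0.0
    for price, action in zip(close_prices, actions):
        if action == 'buy' and cash > 0:
            shares, cash = cash / price, 0.0
        elif action == 'sell' and shares > 0:
            cash, shares = shares * price, 0.0
    final_value = cash + shares * close_prices[-1]
    return (final_value / initial_cash - 1) * 100  # percent return

# Example: buy at 10, sell at 12 -> 20.0 percent return
print(total_return([10, 11, 12, 11], ['buy', 'none', 'sell', 'none']))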

The RLAgent directory is an implementation of the traditional RL algorithm SARSA-λ using Cython. In order to run it from Main.ipynb, you should first build the Cython file. To do that, run the following script inside its directory in a terminal:

python setup.py build_ext --inplace

This works on both Linux and Windows.
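For reference, a typical setup.py for compiling a Cython extension looks roughly like the following; the .pyx filename here is hypothetical, and the repository's own setup.py may differ.

# setup.py -- minimal Cython build script (illustrative)
from setuptools import setup
from Cython.Build import cythonize

setup(
    name='rl_agent',
    ext_modules=cythonize('sarsa_lambda.pyx'),  # hypothetical .pyx filename
)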

For more information on the algorithms and models, please refer to the original papers; you can find them in the References section.

If you have any questions regarding the paper or the code, or you would like to contribute, please send me an email: [email protected]

References

@article{taghian2020learning,
  title={Learning financial asset-specific trading rules via deep reinforcement learning},
  author={Taghian, Mehran and Asadi, Ahmad and Safabakhsh, Reza},
  journal={arXiv preprint arXiv:2010.14194},
  year={2020}
}

@article{taghian2021reinforcement,
  title={A Reinforcement Learning Based Encoder-Decoder Framework for Learning Stock Trading Rules},
  author={Taghian, Mehran and Asadi, Ahmad and Safabakhsh, Reza},
  journal={arXiv preprint arXiv:2101.03867},
  year={2021}
}