PGPortfolio: Policy Gradient Portfolio, the source code of "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem" (https://arxiv.org/pdf/1706.10059.pdf).

Overview

This is the original implementation of our paper, A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem (arXiv:1706.10059), together with a toolkit for portfolio management research.

  • The deep reinforcement learning framework is the core part of the library. The method is basically policy gradient on the immediate reward (see the sketch after this list). The network topology, training method and input data can be configured in a separate JSON file. The training process is recorded, and users can visualize it with TensorBoard. Result summaries and parallel training are supported for better hyper-parameter optimization.
  • Financial-model-based portfolio management algorithms are also included in the library for comparison purposes; their implementation is based on Li and Hoi's toolkit OLPS.
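
As a rough illustration of "policy gradient on the immediate reward": the quantity being maximized is the one-period log return of the portfolio chosen by the network (Eq. (10) of the paper). The sketch below is not the library's TensorFlow code, just a minimal NumPy rendering of that reward, with the transaction-remainder factor reduced to a flat commission placeholder.

```python
import numpy as np

# Minimal sketch of the immediate reward the agent maximizes (not the library's
# actual TensorFlow graph). y is the relative price vector close_{t+1} / close_t,
# w is the portfolio weight vector produced by the policy network.
def immediate_log_reward(w, y, commission=0.0025):
    portfolio_change = np.dot(y, w) * (1.0 - commission)  # crude stand-in for mu_t
    return np.log(portfolio_change)

w = np.array([0.2, 0.5, 0.3])        # weights over 3 assets (sum to 1)
y = np.array([1.00, 1.02, 0.99])     # relative prices over one period
print(immediate_log_reward(w, y))    # ~0.0045
```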

Differences from the article version

Note that this library is a part of our main project, and it is several versions ahead of the article.

  • In this version, some technical bugs are fixed and improvements have been made in hyper-parameter tuning and engineering.
    • The most important bug in the arXiv v2 article is that the test time-span mentioned there is about 30% shorter than the actual experiment. Thus the volume-observation interval (used for asset selection) overlapped with the backtest data in the paper.
  • With the new hyper-parameters, users can train the models in much less time (under 30 minutes).
  • All updates will be incorporated into future versions of the paper.
  • The original versioning history and internal discussions, including some in-code comments, are removed in this open-sourced edition. These contain our unimplemented ideas, some of which will very likely become the foundation of our future publications.

Platform Support

Python 3.5+ on Windows and Python 2.7+/3.5+ on Linux are supported.

Dependencies

Install dependencies via `pip install -r requirements.txt`:

  • tensorflow (>= 1.0.0)
  • tflearn
  • pandas
  • ...

User Guide

Please check out the User Guide.

Acknowledgement

This project would not have been finished without the code from the following open source projects:

Community Contribution

We welcome contributions from the community, including but not limited to:

  • Bug fixing
  • Interfacing to other markets such as stock, futures, options
  • Adding broker API (under marketdata)
  • More backtest strategies (under tdagent)

Risk Disclaimer (for Live-trading)

There is always risk of loss in trading. All trading strategies are used at your own risk.

The volumes of many cryptocurrency markets are still low. Market impact and slippage may badly affect the results during live trading.

Donation

If you have made some profits because of this project, or you just love reading our code, please consider making a small donation to our ongoing projects via the following BTC or ETH address. All donations will be used as student stipends.

Comments
  • Question about reward function and `__pack_samples`


    I'm having trouble reconciling what I read in the paper and what I read in the code.

    The reward function for a single period in the paper (Eq. (10)) is \log(\mu_t\, y_t \cdot w_{t-1}). But in the code, it seems that the reward is instead \log(\mu_t\, y_{t+1} \cdot w_t). Am I correct?

    Because __pack_samples (in datamatrices.py) makes the price tensor X using M[..., :-1] and the relative price vector y using M[..., -1] / M[..., -2], y is one period ahead of X.
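
    (A toy illustration of the indexing described above, with simplified shapes; this is not the exact datamatrices.py code, just a sketch of why y sits one period ahead of the last column of X.)

    ```python
    import numpy as np

    # Dummy price window: 1 feature, 1 coin, window+1 = 10 periods.
    M = np.arange(1.0, 11.0).reshape(1, 1, 10)

    X = M[..., :-1]              # network input: periods 0..8
    y = M[..., -1] / M[..., -2]  # relative price of period 9 vs. period 8

    print(X.shape)  # (1, 1, 9)
    print(y)        # [[1.11111111]]
    ```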

    opened by ziofil 14
  • ValueError during training


    Running on Python 3.4.3, after I call python3 main.py --mode=train --processes=1, I get the following error:

    ValueError: the length of selected coins 0 is not equal to expected 11

    Perhaps this is an issue with my version of Python?

    opened by jpa99 10
  • online training


    Hello

    Thanks for the wonderful work. I read your paper and studied most of the code. However, I don't get the concept of append_experience and agent training in the rolling_train method. I have some questions, if I may ask:
    1. What is the format of the saved experience and how does it affect the model?
    2. How is that different from training the model directly using self._agent.train()?
    3. Is the experience mentioned here the same as the mini-batches mentioned in the paper for the online learning section (5.3), for example?

    Thanks in advance, Sarah Ahmed

    opened by zingomaster 8
  • working config


    I'm trying to reproduce the result plotted in the User Guide (10^2), but with the default config I'm getting much worse results. Which config was used in the example? Thanks!

    opened by laci84 8
  • ConvLayer Filters


    Figure 2 in the paper (attached image): shouldn't the convolutional filters be 3-dimensional? I mean, in the first convolution, how do we go from 3 feature maps to 2 feature maps? I believe this would make sense if the filter were of dimension 2x1x3 (the same as described, but with an additional depth of 2). And then the second convolution would be 2x48 to get the 20 11x1 feature maps.

    net_config.json: in the ConvLayer, I don't understand how {"filter_shape":[1,2],"filter_number":3} corresponds to the filters outlined in the paper, as described in my question above. (Excuse my ignorance of tflearn, but the params to conv2d() are not well explained in the documentation.)
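
    (For reference, a hedged sketch of how such a config entry could map onto tflearn's conv_2d; the shapes here, 11 coins, a window of 50, and 3 features, follow the paper, but this is not the library's network-builder code. The key point: a 2D convolution kernel always spans the full input depth, so a [1, 2] filter over a 3-feature input is effectively 1x2x3.)

    ```python
    import tflearn

    # Input laid out as [batch, coins, window, features]; sizes follow the paper.
    net = tflearn.input_data(shape=[None, 11, 50, 3])
    # {"filter_shape": [1, 2], "filter_number": 3} -> 3 output maps, each from a
    # 1x2 kernel that implicitly extends over all 3 input feature maps.
    net = tflearn.conv_2d(net, nb_filter=3, filter_size=[1, 2],
                          padding='valid', activation='relu')
    print(net.get_shape().as_list())  # [None, 11, 49, 3]
    ```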


    opened by LinuxIsCool 7
  • reversed_USDT vs BTC


    Hello,

    In the code, I don't understand the difference between reversed_USDT and the cash asset (BTC).

    I supposed that USDT_BTC, which is actually BTC/USD, maps to just holding some weight in BTC.

    Am I wrong?

    opened by AhmMontasser 7
  • Backtest trade by strategy, check fees vs coin value update


    In BackTest, "omega" seems to be the vector w_t storing the recommended new portfolio distribution at each step, and "_last_omega" the previous portfolio snapshot w_{t-1}. So the system assumes it can, at each step, sell all the current coins in the portfolio and buy the full "omega" recommendation, or at least the delta between omega and last_omega. This strong hypothesis (slippage/liquidity) is in your paper, but shouldn't it at least check whether a coin-quantity adjustment would cost more in transaction fees than the expected value adjustment?
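
    (A hedged sketch of the kind of check suggested here; should_rebalance, COMMISSION_RATE and expected_gain are hypothetical names, not part of PGPortfolio. The only grounded idea is comparing the fee on the omega/last_omega delta against the expected benefit.)

    ```python
    import numpy as np

    COMMISSION_RATE = 0.0025  # assumed flat per-trade fee

    def should_rebalance(last_omega, omega, expected_gain):
        """Return True only if the expected benefit of moving from last_omega to
        omega (as a fraction of portfolio value) exceeds the approximate fee."""
        turnover = np.abs(omega - last_omega).sum()   # fraction of the portfolio traded
        fee_cost = COMMISSION_RATE * turnover
        return expected_gain > fee_cost

    last_omega = np.array([0.50, 0.30, 0.20])
    omega      = np.array([0.49, 0.31, 0.20])
    print(should_rebalance(last_omega, omega, expected_gain=1e-5))  # False: fee dominates
    ```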

    opened by doxav 7
  • Poloniex API no longer accessible programmatically


    Looks like the Poloniex API is no longer accessible programmatically. I'll look into alternative APIs and will try to follow up with a pull request for this.

    opened by ielashi 7
  • updated Readme and User Guide


    Hey @ZhengyaoJiang, I've updated the readme and user guide to reflect the current version of the library. Please have a look and let me know if I missed anything or if there are other things that need improvement.

    I mostly simplified the explanation and made it clearer where I thought there were ambiguities.

    opened by ghego 6
  • ForwardTest class


    Hi, thank you for your excellent work, this is very interesting stuff.

    I am eager to test this on the live market, but I'm having trouble moving from backtesting to forward testing. Any chance that an update with a ForwardTest class is on the way, or could you advise on how to implement it? I understand it roughly, i.e. the generate_history_matrix() function needs to update the data matrix with the newest market data (with "online" = true in the config file) and return that, and trade_by_strategy() clearly needs a slight rewrite compared to BackTest, since we don't know the future price. Any help on how to correctly return the newest market data would be appreciated.
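
    (Not an official answer, just a heavily hedged sketch of the structure such a class could take, assuming the agent exposes a decide_by_history(history, last_omega)-style call; the data_source wrapper and its latest_window() method are hypothetical.)

    ```python
    class ForwardTest:
        """Sketch only: step on live data instead of a pre-built test set."""

        def __init__(self, agent, data_source):
            self.agent = agent                # trained policy network
            self.data_source = data_source    # hypothetical live-data wrapper ("online": true)
            self.last_omega = None            # previous portfolio weights

        def generate_history_matrix(self):
            # Refresh the local DB with the newest candles and return the latest
            # normalized price window for the network.
            return self.data_source.latest_window()   # hypothetical method

        def trade_by_strategy(self):
            history = self.generate_history_matrix()
            omega = self.agent.decide_by_history(history, self.last_omega)
            # Unlike BackTest, the realized return is only known one period later,
            # once the next close is observed; here we just record/submit the weights.
            self.last_omega = omega
            return omega
    ```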

    opened by einarbmag 6
  • Learning procedure


    Hello again!

    May I ask here for more details about the learning procedure? I'm not really in shape to understand all the code; maybe with your guidance here I'll go through it again with more success.

    1. During the training phase, how many times does the CNN learn from the same batch? Do you use epochs, or does the CNN pass through the data only once?
    2. During the CV and test phases, rolling training is used. On what data do the CNN weights get updated? After all orders have been completed in the current period, we add the price history to the local DB. Do we select N periods before the current period as the learning batch, or do we update the weights using only the last price window?

    Sorry if these are newbie questions; I just want to understand how this magic works.

    opened by lytkarinskiy 6
  • KeyError: 'BTS_BTC'


    Hi,

    I've tried several configurations of my Anaconda environment. At first, I managed to make the python main.py --mode=download_data part work, but then I ran into the pandas-update issues mentioned in other issues. Trying to fix that, I cannot get back to my initial progress, even though I've made a new environment and forked the repo once again.

    The error I get is:

    Traceback (most recent call last):
      File "C:\Users\Alexander.S.Dahlberg\source\repos\PGPortfolio\main.py", line 132, in <module>
        main()
      File "C:\Users\Alexander.S.Dahlberg\source\repos\PGPortfolio\main.py", line 71, in main
        DataMatrices(start=start,
      File "C:\Users\Alexander.S.Dahlberg\source\repos\PGPortfolio\pgportfolio\marketdata\datamatrices.py", line 44, in __init__
        self.__history_manager = gdm.HistoryManager(coin_number=coin_filter, end=self.__end,
      File "C:\Users\Alexander.S.Dahlberg\source\repos\PGPortfolio\pgportfolio\marketdata\globaldatamatrix.py", line 24, in __init__
        self._coin_list = CoinList(end, volume_average_days, volume_forward)
      File "C:\Users\Alexander.S.Dahlberg\source\repos\PGPortfolio\pgportfolio\marketdata\coinlist.py", line 35, in __init__
        prices.append(1.0 / float(ticker[k]['last']))
    KeyError: 'BTS_BTC'

    I've no clue how to solve this. Has anyone else experienced this issue?
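
    (For what it's worth, a hedged sketch of the kind of guard that would avoid this KeyError; only the quoted prices.append line comes from the traceback. The loop and variable names are hypothetical, and the underlying cause is a pair such as BTS_BTC no longer being present in the exchange ticker.)

    ```python
    # Hypothetical stand-ins: a ticker dict missing 'BTS_BTC' and a list of pairs.
    ticker = {'BTC_ETH': {'last': '0.07'}}          # 'BTS_BTC' absent, as in the error
    pairs = ['BTC_ETH', 'BTS_BTC']
    prices = []

    for k in pairs:
        info = ticker.get(k)
        if not info or 'last' not in info:
            continue                                # pair delisted/renamed; skip it
        prices.append(1.0 / float(info['last']))

    print(prices)   # [14.2857...] -- no KeyError for the missing pair
    ```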

    Thanks

    opened by dalle244 4
  • How to run this agent


    Hi! I am trying to run your code in Visual Studio 2017. I have downloaded and installed all the necessary libraries and dependencies. I open the main.py file and run it, and a console window opens, which I attach below. I am not a native Python user, so a step-by-step procedure would be extremely helpful.

    opened by UmairKhalidKhan 0
  • Problem of dtype arguments


    Hello,

    When I run this code:

    python main.py --mode=train --processes=1

    I get this error: TypeError: __init__() got multiple values for argument 'dtype'

    I changed only the start and end times in the configuration file. Are there any recommendations?

    Here's the logfile:

    INFO:root:select coin online from 2021-10-12 00:00 to 2021-11-11 00:00
    DEBUG:root:Selected coins are: ['reversed_USDT', 'reversed_USDC', 'ETH', 'LTC', 'XRP', 'SRM', 'DOGE', 'XMR', 'BCH', 'DOT', 'EOS']
    INFO:root:fill SRM data from 2021-03-26 00:00 to 2021-06-24 11:59
    INFO:root:fill SRM data from 2021-06-24 12:00 to 2021-09-22 23:59
    INFO:root:fill SRM data from 2021-09-23 00:00 to 2021-12-01 00:00
    INFO:root:fill DOT data from 2021-03-26 00:00 to 2021-06-24 11:59
    INFO:root:fill DOT data from 2021-06-24 12:00 to 2021-09-22 23:59
    INFO:root:fill DOT data from 2021-09-23 00:00 to 2021-12-01 00:00
    INFO:root:fill EOS data from 2021-11-30 23:00 to 2021-12-01 00:00
    INFO:root:feature type list is ['close', 'high', 'low']
    DEBUG:root:buffer_bias is 0.000050
    INFO:root:the number of training examples is 11008, of test examples is 929
    DEBUG:root:the training set is from 0 to 11007
    DEBUG:root:the test set is from 11040 to 12000

    Thanks.

    opened by duodenum96 0
  • Normalization on open price


    I think normalization by the open price is incorrect for this task. In real life you cannot buy at the open price when you already know the high and low; from my point of view, for realistic testing you should normalize by the close price (the open of the next candle). If you do this, the results will be significantly worse. Have I made a mistake in my reasoning?
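
    (A toy illustration of the alternative the commenter proposes, not the library's preprocessing: dividing the feature window by the latest close, which equals the next candle's open, only uses prices that are final at decision time.)

    ```python
    import numpy as np

    closes = np.array([100.0, 102.0, 101.0, 103.0])   # closes up to decision time t
    features = closes / closes[-1]                     # normalize by the latest close
    print(features)   # approx. [0.971 0.990 0.981 1.0]
    ```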

    opened by i7p9h9 1
  • Fix: train_summary.csv not generated


    Hello,

    I was not getting an error in the train phase, but train_summary.csv was not generated either. Then, when I ran a backtest, I got the error "train_summary.csv not found".

    I found the solution. The problem is related to indentation in tradertrainer.py (the __log_result_csv method). At the end of the method, replace the lines with the indented ones as attached.

    That will produce the necessary CSV file.

    opened by Busy2045 0
Releases (v1.0)
Owner
Zhengyao Jiang
PhD student at UCL, interested in deep learning, neuro-symbolic methods and reinforcement learning.