htsprophet

Hierarchical Time Series Forecasting using Prophet

Overview
Credit to Rob J. Hyndman and his research partners, as much of this code was developed with the help of their work.

https://www.otexts.org/fpp

https://robjhyndman.com/publications/

Credit to Facebook and their fbprophet package.

https://facebookincubator.github.io/prophet/

It was my intention to make some of the code look similar to certain sections in the Prophet and (Hyndman's) hts packages.

Downloading

  1. pip install htsprophet

If you'd like to skip straight to coding with the package, runHTS.py should help you with that. If you prefer reading, the following should help you understand how I built htsprophet and how it works.

Part I: The Data

I originally used Redfin traffic data to build this package.

I pulled the data so that date was in the first column, my layers were the middle columns, and the number I wanted to forecast was in the last column.

I made a function called makeWeekly() that rolls your data up to the weekly level. It's not a necessary function; it was mostly just convenient for me.

So the data looked like this:

| Date | Platform | Medium | BusinessMarket | Sessions |
|-----------|--------------|--------|----------------|----------|
| 1100 B.C. | Stone Tablet | Land | Birmingham | 23234 |
| ... | Car Phone | Air | Auburn | 2342 |
| ... | ... | Sea | Evanston | 233 |
| ... | ... | ... | Seattle | 445 |
| ... | ... | ... | ... | 46362 |

I then ran my orderHier() function with just this dataframe as its input.

NOTE: you cannot run this function if you have more than 4 middle columns (i.e., more than 4 columns between Date and Sessions in this example).

To run this function, you specify the data, and how you want your middle columns to be ordered.

So orderHier(data, 2, 1, 3) means you want the second column after Date (Medium) to be the first level of the hierarchy, the first column (Platform) to be the second level, and the third column (BusinessMarket) to be the third level.

Our example would look like this:


| Date | Total | Land | Air | Sea | Land_Stone tablet | Land_Car Phone | Air_Stone tablet |
|-----------|-------|-------|-----|-----|-------------------|----------------|------------------|
| 1100 B.C. | 24578 | 23135 | 555 | 888 | 23000 | 135 | 550 |
| 1099 B.C. | 86753 | 86654 | 44 | 55 | 2342 | 84312 | 22 |
| ... | ... | ... | ... | ... | ... | ... | ... |

*All numbers represent the number of sessions for each node in the hierarchy

If you have more than 4 categorical columns, then you must get the data into this format on your own while also producing the list of lists called nodes.

nodes – describes the structure of the hierarchy.

Here it would equal [[3],[2,2,2],[4,4,4,4,4,4]]

There are 3 nodes in the first level: Land, Air, Sea.

There are 2 children for each of those nodes: Stone tablet, Car phone.

There are 4 business markets under each of those nodes: Birmingham, Auburn, Evanston, and Seattle.

If you use the orderHier function, nodes will be the second output of the function.
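Putting Part I together, here is a minimal sketch of the data-prep step. It assumes makeWeekly and orderHier are importable from htsprophet.hts (as runHTS.py does), and the CSV file name is just a placeholder for whatever holds your Date / Platform / Medium / BusinessMarket / Sessions data.

```python
# A minimal data-prep sketch. Assumptions: makeWeekly and orderHier are importable
# from htsprophet.hts, and "sessions.csv" is a placeholder file with columns
# Date | Platform | Medium | BusinessMarket | Sessions.
import pandas as pd
from htsprophet.hts import makeWeekly, orderHier

data = pd.read_csv("sessions.csv")
data = makeWeekly(data)               # optional: roll the rows up to the weekly level
y, nodes = orderHier(data, 2, 1, 3)   # Medium -> level 1, Platform -> level 2, BusinessMarket -> level 3
print(nodes)                          # for this example: [[3], [2, 2, 2], [4, 4, 4, 4, 4, 4]]
```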

Part II: Prophet Inputs

Anything that you would specify in Prophet you can specify in hts().

It’s flexible and will allow you to input a dataframe of values for inputs like cap, capF, and changepoints.

All of these inputs are specified when you call hts, and after that you just let it run.

The following is the description of inputs and outputs for hts as well as the specified defaults:

Parameters
----------------
 y - dataframe of time-series data
           Layout:
               0th Col - Time instances
               1st Col - Total of TS
               2nd Col - One of the children of the Total TS
               3rd Col - The other child of the Total TS
               ...
               ... Rest of the 1st layer
               ...
               Xth Col - First Child of the 2nd Col
               ...
               ... All of the 2nd Col's Children
               ...
               X+Yth Col - First Child of the 3rd Col
               ...
               ..
               .   And so on...

 h - number of step ahead forecasts to make (int)

 nodes - a list or list of lists of the number of child nodes at each level
 Ex. if the hierarchy is one total with two child nodes that comprise it, the nodes input would be [2]
 
 method – (String)  the type of hierarchical forecasting method that the user wants to use. 
            Options:
            "OLS" - optimal combination using ordinary least squares (Default), 
            "WLSS" - optimal combination using structurally weighted least squares, 
            "WLSV" - optimal combination using variance weighted least squares, 
            "FP" - forcasted proportions (top-down)
            "PHA" - proportions of historical averages (top-down)
            "AHP" - average historical proportions (top-down)
            "BU" - bottom-up (simple addition)
            "CVselect" - select which method is best for you based on 3-fold Cross validation (longer run time)
 
 freq - (Time Frequency) input for the forecasting function of Prophet 
 
 include_history - (Boolean) input for the forecasting function of Prophet
 
 transform - (None or "BoxCox") Do you want to transform your data before fitting the prophet function? If yes, type "BoxCox"
            
 cap - (Dataframe or Constant) carrying capacity of the input time series.  If it is a dataframe, then
                               the number of columns must equal len(y.columns) - 1
                               
 capF - (Dataframe or Constant) carrying capacity of the future time series.  If it is a dataframe, then
                                the number of columns must equal len(y.columns) - 1
 
 changepoints - (DataFrame or List) changepoints for the model to consider fitting. If it is a dataframe, then
                                    the number of columns must equal len(y.columns) - 1
 
 n_changepoints - (int or list) number of changepoints for the model to consider fitting. If it is a list, then
                                      the number of items must equal len(y.columns) - 1
 skipFitting - (Boolean) if y is already a dictionary of dataframes, set this to True, and DO NOT run with method = "cvSelect" or transform = "BoxCox"
 
 numThreads - (int) number of threads you want to use when running cvSelect. Note: 14 threads has been shown to decrease runtime by about 10 percent
 
 All other inputs - see Prophet
 
Returns
-----------------
 ynew - a dictionary of DataFrames with predictions, seasonalities and trends that can all be plotted

Don’t forget to specify the frequency if you’re not using daily data.
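For example, a weekly forecast call might look like the sketch below. The keyword names come from the parameter list above, while the specific choices (h=52, method="WLSV", a Box-Cox transform) are only illustrative, and the import path is assumed to match runHTS.py.

```python
# A hedged example of calling hts() on the weekly data prepared in Part I.
# h, method, freq, and transform are illustrative choices, not required settings.
from htsprophet.hts import hts

ynew = hts(y, h=52, nodes=nodes,
           method="WLSV",        # variance-weighted least squares reconciliation
           freq="W",             # weekly data, so the frequency is specified explicitly
           transform="BoxCox")   # optional Box-Cox transform before fitting Prophet

# ynew is a dictionary of DataFrames; each entry holds the predictions,
# seasonality components, and trend for one node of the hierarchy.
```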

All other functions should be self-explanatory.

Part III: Room For Improvement

  1. Prediction intervals