LONG-TERM SERIES FORECASTING WITH QUERY SELECTOR – EFFICIENT MODEL OF SPARSE ATTENTION

Overview

Query Selector

Here you can find the code and data loaders for the paper https://arxiv.org/pdf/2107.08687v1.pdf . Query Selector is a novel sparse-attention Transformer algorithm that is especially well suited to long-term time series forecasting. A minimal sketch of the general idea follows below.
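As a rough illustration of query-selection sparse attention, full attention can be computed for only a small subset of queries. This is a hypothetical sketch, not the paper's exact algorithm: the `frac` knob, the query-norm selection heuristic, and the mean-value fallback are all illustrative assumptions; see the paper for the actual selection criterion.

```python
import torch

def query_selector_attention(q, k, v, frac=0.25):
    """Sketch of sparse attention over a selected subset of queries.

    q, k, v: tensors of shape (batch, seq_len, d_model).
    frac:    fraction of queries that receive full attention (assumed knob).
    """
    b, n, d = q.shape
    n_sel = max(1, int(n * frac))

    # Pick the n_sel "strongest" queries by L2 norm. This heuristic is an
    # illustrative assumption; the paper defines its own criterion.
    idx = q.norm(dim=-1).topk(n_sel, dim=-1).indices          # (b, n_sel)
    q_sel = q.gather(1, idx.unsqueeze(-1).expand(-1, -1, d))  # (b, n_sel, d)

    # Full scaled dot-product attention, but only for the selected queries.
    attn = torch.softmax(q_sel @ k.transpose(-2, -1) / d ** 0.5, dim=-1)

    # Unselected positions fall back to the mean of the values, a common
    # cheap approximation in sparse-attention methods.
    out = v.mean(dim=1, keepdim=True).expand(b, n, d).clone()
    out.scatter_(1, idx.unsqueeze(-1).expand(-1, -1, d), attn @ v)
    return out
```

With q = k = v = torch.randn(2, 96, 64), this returns a (2, 96, 64) tensor in which only 24 of the 96 positions received an exact attention output, so the expensive softmax is computed over a quarter of the queries.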

Dependencies

Python            3.7.9
deepspeed         0.4.0
numpy             1.20.3
pandas            1.2.4
scipy             1.6.3
tensorboardX      1.8
torch             1.7.1
torchaudio        0.7.2
torchvision       0.8.2
tqdm              4.61.0
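One way to install the pinned versions (assuming a pip-based environment; Python 3.7.9 itself comes from your interpreter):

```
pip install deepspeed==0.4.0 numpy==1.20.3 pandas==1.2.4 scipy==1.6.3 \
            tensorboardX==1.8 torch==1.7.1 torchaudio==0.7.2 \
            torchvision==0.8.2 tqdm==4.61.0
```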

Results on the ETT dataset

Univariate

| Data  | Prediction length | Informer MSE | Informer MAE | Transformer MSE | Transformer MAE | Query Selector MSE | Query Selector MAE | MSE ratio |
|-------|-------------------|--------------|--------------|-----------------|-----------------|--------------------|--------------------|-----------|
| ETTh1 | 24  | 0.0980 | 0.2470 | 0.0548 | 0.1830 | 0.0436 | 0.1616 | 0.445 |
| ETTh1 | 48  | 0.1580 | 0.3190 | 0.0740 | 0.2144 | 0.0721 | 0.2118 | 0.456 |
| ETTh1 | 168 | 0.1830 | 0.3460 | 0.1049 | 0.2539 | 0.0935 | 0.2371 | 0.511 |
| ETTh1 | 336 | 0.2220 | 0.3870 | 0.1541 | 0.3201 | 0.1267 | 0.2844 | 0.571 |
| ETTh1 | 720 | 0.2690 | 0.4350 | 0.2501 | 0.4213 | 0.2136 | 0.3730 | 0.794 |
| ETTh2 | 24  | 0.0930 | 0.2400 | 0.0999 | 0.2479 | 0.0843 | 0.2239 | 0.906 |
| ETTh2 | 48  | 0.1550 | 0.3140 | 0.1218 | 0.2763 | 0.1117 | 0.2622 | 0.721 |
| ETTh2 | 168 | 0.2320 | 0.3890 | 0.1974 | 0.3547 | 0.1753 | 0.3322 | 0.756 |
| ETTh2 | 336 | 0.2630 | 0.4170 | 0.2191 | 0.3805 | 0.2088 | 0.3710 | 0.794 |
| ETTh2 | 720 | 0.2770 | 0.4310 | 0.2853 | 0.4340 | 0.2585 | 0.4130 | 0.933 |
| ETTm1 | 24  | 0.0300 | 0.1370 | 0.0143 | 0.0894 | 0.0139 | 0.0870 | 0.463 |
| ETTm1 | 48  | 0.0690 | 0.2030 | 0.0328 | 0.1388 | 0.0342 | 0.1408 | 0.475 |
| ETTm1 | 96  | 0.1940 | 0.2030 | 0.0695 | 0.2085 | 0.0702 | 0.2100 | 0.358 |
| ETTm1 | 288 | 0.4010 | 0.5540 | 0.1316 | 0.2948 | 0.1548 | 0.3240 | 0.328 |
| ETTm1 | 672 | 0.5120 | 0.6440 | 0.1728 | 0.3437 | 0.1735 | 0.3427 | 0.338 |

Multivariate

| Data  | Prediction length | Informer MSE | Informer MAE | Transformer MSE | Transformer MAE | Query Selector MSE | Query Selector MAE | MSE ratio |
|-------|-------------------|--------------|--------------|-----------------|-----------------|--------------------|--------------------|-----------|
| ETTh1 | 24  | 0.5770 | 0.5490 | 0.4496 | 0.4788 | 0.4226 | 0.4627 | 0.732 |
| ETTh1 | 48  | 0.6850 | 0.6250 | 0.4668 | 0.4968 | 0.4581 | 0.4878 | 0.669 |
| ETTh1 | 168 | 0.9310 | 0.7520 | 0.7146 | 0.6325 | 0.6835 | 0.6088 | 0.734 |
| ETTh1 | 336 | 1.1280 | 0.8730 | 0.8321 | 0.7041 | 0.8503 | 0.7039 | 0.738 |
| ETTh1 | 720 | 1.2150 | 0.8960 | 1.1080 | 0.8399 | 1.1150 | 0.8428 | 0.912 |
| ETTh2 | 24  | 0.7200 | 0.6650 | 0.4237 | 0.5013 | 0.4124 | 0.4864 | 0.573 |
| ETTh2 | 48  | 1.4570 | 1.0010 | 1.5220 | 0.9488 | 1.4074 | 0.9317 | 0.966 |
| ETTh2 | 168 | 3.4890 | 1.5150 | 1.6225 | 0.9726 | 1.7385 | 1.0125 | 0.465 |
| ETTh2 | 336 | 2.7230 | 1.3400 | 2.6617 | 1.2189 | 2.3168 | 1.1859 | 0.851 |
| ETTh2 | 720 | 3.4670 | 1.4730 | 3.1805 | 1.3668 | 3.0664 | 1.3084 | 0.884 |
| ETTm1 | 24  | 0.3230 | 0.3690 | 0.3150 | 0.3886 | 0.3351 | 0.3875 | 0.975 |
| ETTm1 | 48  | 0.4940 | 0.5030 | 0.4454 | 0.4620 | 0.4726 | 0.4702 | 0.902 |
| ETTm1 | 96  | 0.6780 | 0.6140 | 0.4641 | 0.4823 | 0.4543 | 0.4831 | 0.670 |
| ETTm1 | 288 | 1.0560 | 0.7860 | 0.6814 | 0.6312 | 0.6185 | 0.5991 | 0.586 |
| ETTm1 | 672 | 1.1920 | 0.9260 | 1.1365 | 0.8572 | 1.1273 | 0.8412 | 0.946 |
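The MSE ratio column is not spelled out above; across the rows it matches the better (lower) of the Transformer and Query Selector MSEs divided by Informer's MSE. A quick check of that reading:

```python
def mse_ratio(informer_mse, transformer_mse, query_selector_mse):
    # Assumed reading of the tables' "MSE ratio" column: the better of
    # the two proposed models' MSEs relative to Informer's MSE.
    return min(transformer_mse, query_selector_mse) / informer_mse

# Univariate ETTh1, horizon 24: 0.0436 / 0.0980 -> 0.445, as in the table.
print(round(mse_ratio(0.0980, 0.0548, 0.0436), 3))  # 0.445
```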

State of the Art

[PapersWithCode state-of-the-art badges]

Citation

@misc{klimek2021longterm,
      title={Long-term series forecasting with Query Selector -- efficient model of sparse attention}, 
      author={Jacek Klimek and Jakub Klimek and Witold Kraskiewicz and Mateusz Topolewski},
      year={2021},
      eprint={2107.08687},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Contact

If you have any questions, please contact us by email: [email protected]
