tsmoothie

A Python library for time-series smoothing and outlier detection in a vectorized way.

Overview

tsmoothie computes, in a fast and efficient way, the smoothing of single or multiple time-series.

The smoothing techniques available are:

  • Exponential Smoothing
  • Convolutional Smoothing with various window types (constant, hanning, hamming, bartlett, blackman)
  • Spectral Smoothing with Fourier Transform
  • Polynomial Smoothing
  • Spline Smoothing of various kinds (linear, cubic, natural cubic)
  • Gaussian Smoothing
  • Binner Smoothing
  • LOWESS
  • Seasonal Decompose Smoothing of various kinds (convolution, lowess, natural cubic spline)
  • Kalman Smoothing with customizable components (level, trend, seasonality, long seasonality)
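
As a quick taste of the API, here is a minimal sketch of configuring the Kalman smoother's components; the component and component_noise arguments mirror those used in the discussion threads further down, while all other constructor parameters are left at their assumed defaults.

import numpy as np
from tsmoothie.utils_func import sim_randomwalk
from tsmoothie.smoother import KalmanSmoother

# simulate a single random walk of length 200
np.random.seed(33)
data = sim_randomwalk(n_series=1, timesteps=200,
                      process_noise=10, measure_noise=30)

# declare the state components and their noise levels at construction time
smoother = KalmanSmoother(component='level_trend',
                          component_noise={'level': 0.1, 'trend': 0.1})
smoother.smooth(data)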

tsmoothie provides the calculation of intervals as a result of the smoothing process. This can be useful to identify outliers and anomalies in time-series, as sketched after the list below.

Depending on the smoothing method used, the available interval types are:

  • sigma intervals
  • confidence intervals
  • prediction intervals
  • kalman intervals
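
A minimal sketch of interval-based outlier flagging (the smoother, interval type, and threshold chosen here are illustrative, not library defaults):

import numpy as np
from tsmoothie.utils_func import sim_randomwalk
from tsmoothie.smoother import ConvolutionSmoother

# simulate a noisy random walk
np.random.seed(33)
data = sim_randomwalk(n_series=1, timesteps=200,
                      process_noise=10, measure_noise=30)

# smooth and build sigma intervals around the smoothed curve
smoother = ConvolutionSmoother(window_len=20, window_type='ones')
smoother.smooth(data)
low, up = smoother.get_intervals('sigma_interval', n_sigma=2)

# observations falling outside the bands are candidate outliers
is_outlier = (smoother.data < low) | (smoother.data > up)
print(np.where(is_outlier[0])[0])  # indices of flagged points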

tsmoothie can carry out a sliding smoothing approach to simulate an online usage. This is possible by splitting the time-series into equal-sized pieces and smoothing them independently. As always, this functionality is implemented in a vectorized way through the WindowWrapper class.
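
A short sliding-window sketch; the window_shape argument and the wrapped-smoother attribute accessed at the end are assumptions based on the library's notebooks:

import numpy as np
from tsmoothie.utils_func import sim_randomwalk
from tsmoothie.smoother import LowessSmoother, WindowWrapper

np.random.seed(33)
data = sim_randomwalk(n_series=1, timesteps=200,
                      process_noise=10, measure_noise=30)

# smooth sliding windows of 30 points, all in one vectorized pass
smoother = WindowWrapper(LowessSmoother(smooth_fraction=0.3, iterations=1),
                         window_shape=30)
smoother.smooth(data)

# the wrapped smoother (assumed exposed as .Smoother) holds one smoothed row
# per window position
print(smoother.Smoother.smooth_data.shape)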

tsmoothie can perform time-series bootstrapping through the BootstrappingWrapper class.

The supported bootstrap algorithms, selected through the bootstrap_type argument (see the sketch after the list), are:

  • non-overlapping block bootstrap
  • moving block bootstrap
  • circular block bootstrap
  • stationary bootstrap
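
A selection sketch; the string codes below are assumed to map to the four algorithms above, in order:

from tsmoothie.smoother import ConvolutionSmoother
from tsmoothie.bootstrap import BootstrappingWrapper

# assumed codes: 'nbb' (non-overlapping block), 'mbb' (moving block),
# 'cbb' (circular block), 'stb' (stationary)
for code in ('nbb', 'mbb', 'cbb', 'stb'):
    bts = BootstrappingWrapper(ConvolutionSmoother(window_len=8, window_type='ones'),
                               bootstrap_type=code, block_length=24)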


Installation

pip install --upgrade tsmoothie

The module depends only on NumPy, SciPy and simdkalman. Python 3.6 or above is supported.

Usage: smoothing

Below are a couple of examples of how tsmoothie works. Full examples are available in the notebooks folder.

# import libraries
import numpy as np
import matplotlib.pyplot as plt
from tsmoothie.utils_func import sim_randomwalk
from tsmoothie.smoother import LowessSmoother

# generate 3 random walks of length 200
np.random.seed(123)
data = sim_randomwalk(n_series=3, timesteps=200, 
                      process_noise=10, measure_noise=30)

# operate smoothing
smoother = LowessSmoother(smooth_fraction=0.1, iterations=1)
smoother.smooth(data)

# generate intervals
low, up = smoother.get_intervals('prediction_interval')

# plot the smoothed timeseries with intervals
plt.figure(figsize=(18,5))

for i in range(3):
    
    plt.subplot(1,3,i+1)
    plt.plot(smoother.smooth_data[i], linewidth=3, color='blue')
    plt.plot(smoother.data[i], '.k')
    plt.title(f"timeseries {i+1}"); plt.xlabel('time')

    plt.fill_between(range(len(smoother.data[i])), low[i], up[i], alpha=0.3)

Randomwalk Smoothing

# import libraries
import numpy as np
import matplotlib.pyplot as plt
from tsmoothie.utils_func import sim_seasonal_data
from tsmoothie.smoother import DecomposeSmoother

# generate 3 periodic timeseries of length 300
np.random.seed(123)
data = sim_seasonal_data(n_series=3, timesteps=300, 
                         freq=24, measure_noise=30)

# operate smoothing
smoother = DecomposeSmoother(smooth_type='lowess', periods=24,
                             smooth_fraction=0.3)
smoother.smooth(data)

# generate intervals
low, up = smoother.get_intervals('sigma_interval')

# plot the smoothed timeseries with intervals
plt.figure(figsize=(18,5))

for i in range(3):
    
    plt.subplot(1,3,i+1)
    plt.plot(smoother.smooth_data[i], linewidth=3, color='blue')
    plt.plot(smoother.data[i], '.k')
    plt.title(f"timeseries {i+1}"); plt.xlabel('time')

    plt.fill_between(range(len(smoother.data[i])), low[i], up[i], alpha=0.3)

Sinusoidal Smoothing

All the available smoothers integrate fully with sklearn (see here).
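
For readers without access to the linked notebook, here is a hypothetical sketch of exposing a smoother through sklearn's fit/transform convention; SmootherTransformer is not part of tsmoothie:

from sklearn.base import BaseEstimator, TransformerMixin
from tsmoothie.smoother import ConvolutionSmoother

class SmootherTransformer(BaseEstimator, TransformerMixin):
    """Hypothetical wrapper exposing a tsmoothie smoother as fit/transform."""

    def __init__(self, window_len=10, window_type='ones'):
        self.window_len = window_len
        self.window_type = window_type

    def fit(self, X, y=None):
        # smoothing is stateless here, so there is nothing to learn
        return self

    def transform(self, X):
        # a fresh smoother per call avoids keeping transformed data as state
        smoother = ConvolutionSmoother(window_len=self.window_len,
                                       window_type=self.window_type)
        smoother.smooth(X)
        return smoother.smooth_data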

Usage: bootstrap

# import libraries
import numpy as np
import matplotlib.pyplot as plt
from tsmoothie.utils_func import sim_seasonal_data
from tsmoothie.smoother import ConvolutionSmoother
from tsmoothie.bootstrap import BootstrappingWrapper

# generate a periodic timeseries of length 300
np.random.seed(123)
data = sim_seasonal_data(n_series=1, timesteps=300, 
                         freq=24, measure_noise=15)

# operate bootstrap
bts = BootstrappingWrapper(ConvolutionSmoother(window_len=8, window_type='ones'), 
                           bootstrap_type='mbb', block_length=24)
bts_samples = bts.sample(data, n_samples=100)

# plot the bootstrapped timeseries
plt.figure(figsize=(13,5))
plt.plot(bts_samples.T, alpha=0.3, c='orange')
plt.plot(data[0], c='blue', linewidth=2)

Sinusoidal Bootstrap

References

  • Polynomial, Spline, Gaussian and Binner smoothing are carried out by building a regression on custom basis expansions. These implementations are based on the amazing intuitions of Matthew Drury, available here
  • Time Series Modelling with Unobserved Components, Matteo M. Pelagatti
  • Bootstrap Methods in Time Series Analysis, Fanny Bergström, Stockholms universitet
Comments
  • Question on KalmanSmoother usage

    Hi, I have a time-series that has seasonality in certain time windows (let's call them sw) and no seasonality in other windows (let's call them nsw). I plan to pass random windows of this time-series into the smoother.

    I am trying to use KalmanSmoother and am deciding between:

    smoother1 = ts.smoother.KalmanSmoother(component='level_trend_season', 
                                           component_noise={'level':0.1, 'trend':0.1, 'season':0.1})
    
    vs
    
    smoother2 = ts.smoother.KalmanSmoother(component='level_trend', 
                                           component_noise={'level':0.1, 'trend':0.1})
    

    If the random window slice is sw, smoother1 should work just fine, and in nsw cases smoother2 should work better. However, I can only use one smoother.

    My question is: if I pass nsw into smoother1, will it degrade performance compared to passing nsw into smoother2? Is smoother1 smart enough to "ignore" the fact that nsw has no seasonality in its time-series?

    opened by turmeric-blend 5
  • Enhance tsmoothie to be applicable to inputs with multiple dimensions

    Hi, thanks for this library.

    Is it possible to vectorize across multiple dimensions? So a generic N dimensions (..., timesteps); currently it is limited to (series, timesteps). This would be useful for multivariate time-series problems as well as deep learning applications where there is a batch_size. This should be fairly straightforward using PyTorch (actually even doable with numpy). Would there be a computation limitation?
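
    A reshape round-trip is a possible workaround in the meantime (a sketch, not library functionality): fold all leading dimensions into a single series axis, smooth, and restore the original shape.

    import numpy as np
    from tsmoothie.smoother import ConvolutionSmoother

    x = np.random.randn(4, 5, 300)       # e.g. (batch, channels, timesteps)
    flat = x.reshape(-1, x.shape[-1])    # fold leading dims -> (20, 300)

    smoother = ConvolutionSmoother(window_len=10, window_type='ones')
    smoother.smooth(flat)

    smoothed = smoother.smooth_data.reshape(x.shape)  # restore (4, 5, 300)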

    opened by turmeric-blend 4
  • question

    Hi Marco

    First, thank you for your python package!

    Among all the smoothers in the package, which ones are causal? Or are they all non-causal?

    Regards, Ludo

    opened by LinuxpowerLudo 4
  • Numpy rounding issue causes NaN array on Lowess prediction results

    Marco, thanks for the excellent project! You've made a great effort combining all the smoothing theories in one single, easy-to-use library! I couldn't thank you enough!

    I stumbled upon a rounding math problem today in the "prediction_interval" function. This problem is actually not in your code, but in how Numpy chooses to round floating-point numbers in the numpy.sum method:

    For floating point numbers the numerical precision of sum (and np.add.reduce) is in general limited by directly adding each number individually to the result causing rounding errors in every step. However, often numpy will use a numerically better approach (partial pairwise summation) leading to improved precision in many use-cases. This improved precision is always provided when no axis is given. When axis is given, it will depend on which axis is summed. Technically, to provide the best speed possible, the improved precision is only used when the summation is along the fast axis in memory. Note that the exact precision may vary depending on other parameters. In contrast to NumPy, Python’s math.fsum function uses a slower but more precise approach to summation. Especially when summing a large number of lower precision floating point numbers, such as float32, numerical errors can become significant. In such cases it can be advisable to use dtype=”float64” to use a higher precision for the output.

    This only occurred with a very particular set of numbers while using the LowessSmoother, which ended up with a negative value that caused an exception on the square root later on:

    mse = (np.square(resid).sum(axis=1, keepdims=True) / (N - d_free)).T
    ...
    predstd = np.sqrt(predvar).T

    tsmoothie\utils_func.py:306: RuntimeWarning: invalid value encountered in sqrt

    Quick solution first:

    mse = (np.square(resid).sum(axis=1, dtype="float64", keepdims=True) / (N - d_free)).T

    Adding the dtype parameter solved the problem. This causes numpy to increase rounding precision (as stated above) which ended up giving me the correct result.

    Quick observation: as yet, I'm not quite sure how adding dtype might affect speed and performance on all the other smoother methods, but I will have to check on this eventually.

    Explanation and info:

    While calling the Lowess Smoother method, setting the iterations parameter to any value greater than 1 caused the numpy rounding problem on the following set of data:

    data = [6318, 36871, 39933, 22753, 9680, 6503, 4032, 2733, 2807, 2185, 1866, 1800, 1907, 1537, 1357, 1221, 1354, 1514, 2021, 11110, 17656, 17397, 24385, 22361, 18709, 20201, 20245, 25767, 21345, 18928, 20958, 20425, 23066, 20221, 18756, 17403, 17843, 21201, 25867, 17342, 16815, 5700, 25897, 20891, 20022, 22291, 24334, 21304, 25328, 22201, 20308, 21539, 29637, 22740, 19510, 18959, 21160, 23520, 20574, 16519, 18779]

    Problem occurs at data[-3]. The problem doesn't occur when I cut the rest out:

    data[0:len(data)-3]

    At that point, the total sum causes numpy's rounding to go berserk, I imagine.

    This ends up calling the square root exception above, which in turn causes your "prediction_interval" function to return an array of NaN results:

    [[nan nan nan ... nan]] [[nan nan nan ... nan]]  (two 61-element arrays consisting entirely of NaN values)

    Variable "mse" without dtype outputs: [[-35780673.18644068]]

    And after including dtype: [[35780673.18644068]]

    Other important parameters I used, to help you check this:

    • Prediction: prediction_interval
    • Confidence: 0.05
    • Smooth Fraction: 0.3
    • Batch Size: None
    • Didn't use a WindowWrapper

    For this project I'm stuck with iterations between 5 and 6 and no Batch Size; I need to smooth the entire data together.

    By the way, I think there is something going on with the batch_size parameter as well, but I haven't had time to look at it yet.

    Thanks again for the great project!! Keep up the good work!!

    opened by brunom63 3
  • sklearn api

    Would you consider the possibility of making it compatible with sklearn using fit and transform instead of smooth? Is there a specific reason why you save the transformed data as an instance attribute? (this would be against the sklearn API)

    I am thinking of doing it myself for a project I am working on but I wanted to ask you first if I missed anything obvious that would make this difficult or not possible.

    Many thanks

    opened by gioxc88 3
  • Interoperability with sktime

    Hi,

    Rather a discussion point than issue: I just saw your post on https://github.com/MaxBenChrist/awesome_time_series_in_python/issues/31 and I'd love to make sktime easily inter-operable with tsmoothie. Would you be interested in working on that?

    opened by mloning 3
  • About component_noise in Kalman filter

    Hi Marco, I am new to tsmoothie and also to Kalman filters, and I'm in the process of understanding them. I have a doubt about component_noise. I have a time series where daily seasonality is prominent, so mostly component_noise season=0.1 works well (a low sigma value, as I am confident about the daily seasonality). But I have also tried values like 0.01 and 1 for the same. I want to know: are there any valid limits/ranges for the sigma values of component_noise, e.g. 0 to 1 (0 to 100%)?

    opened by tawdes 2
  • Is there a way to extend the model past the data?

    This is a great library, thanks a lot! I have a question: is there a way to extend the smooth/CI past the data domain? See the plot below; aesthetically, I would like the smoothed regions to go to the edges of the graph region.

    [image]
    opened by parksj10 2
  • Which smoother is the best to detect and remove outliers?

    Hi Marco! Thank you for an awesome package!

    I have a quick question for you. Since you're obviously well-versed in time-series smoothing, which particular smoother would you recommend as a default option?

    In particular, I have a training series y_train (which is potentially very short, <50 observations), and I use some univariate forecasting model to forecast H periods ahead, resulting in an H-dim vector y_hat. Since my training vector is not always very long, some flexible methods give me crazy results for y_hat, which I want to reset to some sensible value.

    I could do, for instance,

    # Instantiate smoother
    smoother = ConvolutionSmoother(window_len=int(0.1*len(y_train)), window_type='ones')
    smoother.smooth(pd.concat([y_train, y_hat], axis=0))
            
    # Get threshold
    threshold_lower, threshold_upper = smoother.get_intervals('sigma_interval', n_sigma=2)
            
    # Subset to match length
    threshold_lower = threshold_lower[0,-len(y_hat):]
    threshold_upper = threshold_upper[0,-len(y_hat):]
    

    and then use these thresholds. Do you have any recommendations in this setup?

    opened by muhlbach 2
  • Anomaly inference from smoothed data

    Thanks for developing this library. This is a pretty interesting one. I have a question when using tsmoothie as follows.

    Currently I am using an (unsupervised) clustering method to create a model once on a large amount of data (that assigns inlier and outlier labels) and then query the model repeatedly with small amounts of new data to predict the label (to infer anomaly).

    I am planning to use tsmoothie for filtering the noise in the large input data which will be subjected to clustering to assign inlier and outlier labels. Later, when I use new data points for predicting the normal or anomaly label, I should smooth those also before prediction. Is that correct?

    opened by nsankar 2
  • WindowWrapper behavior with ExponentialSmoother

    When I use the WindowWrapper in combination with LowessSmoother, like in the notebook example, I obtain the desired output (an NxM numpy array, where N=samples and M=window size). However, when I use WindowWrapper with ExponentialSmoother I get an Nx1 numpy array.

    Is this because ExponentialSmoother is an online-ready algorithm?

    code: https://ibb.co/GdBtWJv

    opened by meneghet 2