tsflex is a toolkit for flexible time-series processing & feature extraction, making few assumptions about input data.
## Installation

If you are using pip, just execute the following command:

```sh
pip install tsflex
```
## Why tsflex? ✨

* Flexible:
    * handles multi-variate time-series
    * versatile function support
      => integrates natively with many packages for processing (e.g., scipy.signal) & feature extraction (e.g., numpy, scipy.stats)
    * feature extraction handles multiple strides & window sizes
* Efficient view-based operations
  => extremely low memory peak & fast execution times (see benchmarks)
* Maintains the time-index of the data
* Makes little to no assumptions about the time-series data
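The "view-based operations" point means that windowed data is accessed as strided views on the underlying buffer instead of copies. As a rough illustration of the idea (not tsflex's internals), numpy's `sliding_window_view` builds all windows of a series without duplicating any data:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

x = np.arange(10.0)

# Every length-4 window with stride 1, materialized as a zero-copy view:
# the result shares memory with x, so the peak memory cost stays O(len(x)).
windows = sliding_window_view(x, window_shape=4)

print(windows.shape)               # (7, 4)
print(np.shares_memory(windows, x))  # True
```

Aggregating over such views (e.g., `windows.mean(axis=1)`) only allocates the output, which is why view-based windowing keeps the memory peak low.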
## Usage

tsflex is built to be intuitive, so we encourage you to copy-paste this code and toy with some parameters!
### Series processing

```python
import pandas as pd; import scipy.signal as ssig; import numpy as np
from tsflex.processing import SeriesProcessor, SeriesPipeline

# 1. -------- Get your time-indexed data --------
# Data contains 3 columns; ["ACC_x", "ACC_y", "ACC_z"]
url = "https://github.com/predict-idlab/tsflex/raw/main/examples/data/empatica/acc.parquet"
data = pd.read_parquet(url).set_index("timestamp")

# 2. -------- Construct your processing pipeline --------
processing_pipe = SeriesPipeline(
    processors=[
        SeriesProcessor(function=np.abs, series_names=["ACC_x", "ACC_y", "ACC_z"]),
        SeriesProcessor(ssig.medfilt, ["ACC_x", "ACC_y", "ACC_z"], kernel_size=5),  # (with kwargs!)
    ]
)
# -- 2.1. Append processing steps to your processing pipeline
processing_pipe.append(SeriesProcessor(ssig.detrend, ["ACC_x", "ACC_y", "ACC_z"]))

# 3. -------- Process the data --------
processing_pipe.process(data=data)
```
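In effect, the pipeline above applies `np.abs`, a size-5 median filter, and linear detrending to each accelerometer channel, in that order. A minimal sketch of that same chain with plain pandas & scipy on synthetic data (the `ACC_*` column names are reused purely for illustration):

```python
import numpy as np
import pandas as pd
import scipy.signal as ssig

# Synthetic time-indexed data standing in for the accelerometer parquet file
idx = pd.date_range("2021-01-01", periods=100, freq="10ms", name="timestamp")
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(100, 3)), index=idx,
                    columns=["ACC_x", "ACC_y", "ACC_z"])

# The same chain the SeriesPipeline encodes: abs -> median filter -> detrend
processed = data.abs()
for col in processed.columns:
    filtered = ssig.medfilt(processed[col].to_numpy(), kernel_size=5)
    processed[col] = ssig.detrend(filtered)
```

Note that, unlike this sketch, `SeriesPipeline.process` keeps the time-index of every series intact and works on series of different sampling rates.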
### Feature extraction

```python
import pandas as pd; import scipy.stats as ss; import numpy as np
from tsflex.features import FeatureDescriptor, FeatureCollection, NumpyFuncWrapper

# 1. -------- Get your time-indexed data --------
# Data contains 1 column; ["TMP"]
url = "https://github.com/predict-idlab/tsflex/raw/main/examples/data/empatica/tmp.parquet"
data = pd.read_parquet(url).set_index("timestamp")

# 2. -------- Construct your feature collection --------
fc = FeatureCollection(
    feature_descriptors=[
        FeatureDescriptor(
            function=NumpyFuncWrapper(func=ss.skew, output_names="skew"),
            series_name="TMP",
            window="5min",    # Use 5-minute windows
            stride="2.5min",  # With steps of 2.5 minutes
        )
    ]
)
# -- 2.1. Add features to your feature collection
fc.add(FeatureDescriptor(np.min, "TMP", "2.5min", "2.5min"))

# 3. -------- Calculate features --------
fc.calculate(data=data)
```
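Conceptually, each `FeatureDescriptor` slides a window of the given size over the series with the given stride and applies the wrapped function to every window. A rough equivalent of the 5-minute-window / 2.5-minute-stride skew feature, sketched with plain pandas & scipy on a synthetic 1 Hz temperature series (the output column name is illustrative, not tsflex's exact output format):

```python
import numpy as np
import pandas as pd
import scipy.stats as ss

# Synthetic 1 Hz series standing in for the TMP parquet file (30 minutes of data)
idx = pd.date_range("2021-01-01", periods=1800, freq="1s", name="timestamp")
rng = np.random.default_rng(42)
tmp = pd.Series(rng.normal(loc=31.0, scale=0.5, size=len(idx)), index=idx, name="TMP")

window, stride = pd.Timedelta("5min"), pd.Timedelta("2.5min")
rows = []
t = idx[0]
while t + window <= idx[-1] + pd.Timedelta("1s"):
    # All samples inside [t, t + window); label slicing is inclusive on both ends
    segment = tmp[t : t + window - pd.Timedelta("1s")]
    rows.append((t, ss.skew(segment.to_numpy())))
    t += stride
features = pd.DataFrame(rows, columns=["timestamp", "TMP__skew"]).set_index("timestamp")
```

With 30 minutes of data this yields 11 windows (starts at 0, 2.5, ..., 25 minutes); tsflex performs the same windowing with strided views instead of the explicit loop above.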
## Scikit-learn integration

TODO