More than a hundred strange attractors

dysts

Analyze more than a hundred chaotic systems.

[Figure: an embedding of all chaotic systems in the collection]

Basic Usage

Import a model and run a simulation with default initial conditions and parameter values

from dysts.flows import Lorenz

model = Lorenz()
sol = model.make_trajectory(1000)
# plt.plot(sol[:, 0], sol[:, 1])

Modify a model's parameter values and re-integrate

model = Lorenz()
model.beta = 1              # Lorenz parameters are sigma, rho, and beta
model.ic = [0.1, 0.0, 5.0]  # off-axis initial condition (the z-axis is invariant and non-chaotic)
sol = model.make_trajectory(1000)
# plt.plot(sol[:, 0], sol[:, 1])

Load a precomputed trajectory for the model

eq = Lorenz()
sol = eq.load_trajectory(subsets="test", noise=False, granularity="fine")
# plt.plot(sol[:, 0], sol[:, 1])

Integrate new trajectories from all 131 chaotic systems with a custom granularity

from dysts.base import make_trajectory_ensemble

all_out = make_trajectory_ensemble(100, resample=True, pts_per_period=75)
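
A minimal sketch of inspecting the result, assuming the ensemble is returned as a dictionary keyed by system name (the shape in the comment is illustrative):

# Assumes make_trajectory_ensemble returns a dict mapping system names to trajectory arrays
for name, traj in all_out.items():
    print(name, traj.shape)  # e.g. (100, 3) for a three-dimensional flow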

Load a precomputed collection of time series from all 131 chaotic systems

from dysts.datasets import load_dataset

data = load_dataset(subsets="train", data_format="numpy", standardize=True)

Additional functionality and examples can be found in the demonstrations notebook. The full API documentation can be found here.

Reference

For additional details, please see the preprint. If using this code for published work, please consider citing the paper.

William Gilpin. "Chaos as an interpretable benchmark for forecasting and data-driven modelling." Advances in Neural Information Processing Systems (NeurIPS), 2021. https://arxiv.org/abs/2110.05266

Installation

Install from PyPI

pip install dysts

To obtain the latest version, including new features and bug fixes, download and install the project repository directly from GitHub

git clone https://github.com/williamgilpin/dysts
cd dysts
pip install -I . 

Test that everything is working

python -m unittest

Alternatively, to use this as a regular package without downloading the full repository, install directly from GitHub

pip install git+https://github.com/williamgilpin/dysts

The key dependencies are

  • Python 3+
  • numpy
  • scipy
  • pandas
  • sdeint (optional, but required for stochastic dynamics)
  • numba (optional, but speeds up generation of trajectories)

These additional optional dependencies are needed to reproduce some portions of this repository, such as benchmarking experiments and estimation of invariant properties of each dynamical system:

  • nolds (used for calculating the correlation dimension)
  • darts (used for forecasting benchmarks)
  • sktime (used for classification benchmarks)
  • tsfresh (used for statistical quantity extraction)
  • pytorch (used for neural network benchmarks)

Contributing

New systems. If you know of any systems that should be included, please feel free to submit an issue or pull request. The biggest bottleneck when adding new models is a lack of known parameter values and initial conditions, so please provide a reference or code containing all parameter values necessary to reproduce the claimed dynamics. Because there are infinitely many chaotic systems, we are currently only including systems that have appeared in published work.
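
As an illustration of the kind of self-contained reference that helps, the sketch below bundles a right-hand side, parameter values, and an initial condition so the claimed dynamics can be checked with scipy before inclusion; the system shown (a Rössler-type flow) and its values are placeholders, not a required template:

from scipy.integrate import solve_ivp

# Hypothetical submission sketch: the equations, parameter values, and initial
# condition below are placeholders (a Rossler-type flow), not part of the dysts API.
def proposed_rhs(t, state, a=0.2, b=0.2, c=5.7):
    x, y, z = state
    return [-y - z, x + a * y, b + z * (x - c)]

ic = [1.0, 1.0, 0.0]
sol = solve_ivp(proposed_rhs, (0, 500), ic, max_step=0.01)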

Development and Maintenance. We are very grateful for any suggestions or contributions. See the to-do list below for some of the ongoing work.

Benchmarks

The benchmarks reported in our preprint can be found in benchmarks. An overview of the contents of the directory can be found in BENCHMARKS.md, while individual task areas are summarized in corresponding Jupyter Notebooks within the top level of the directory.

Contents

  • Code to generate the benchmark forecasting and training experiments is included in benchmarks
  • Pre-computed time series with training and test partitions are included in data
  • The raw definitions and metadata for all chaotic systems are included in the database file chaotic_attractors. The Python implementations of the differential equations can be found in the flows module

Implementation Notes

  • Currently there are 131 continuous-time models, including several delay differential equations. There is also a separate module with 10 discrete maps, which is currently being expanded.
  • The right hand side of each dynamical equation is compiled using numba, wherever possible. Ensembles of trajectories are vectorized where needed.
  • Attractor names, default parameter values, references, and other metadata are stored in parseable JSON database files. Parameter values are based on standard or published values, and default initial conditions were generated by running each model until the moments of the autocorrelation function all become stationary.
  • The default integration step is stored in each continuous-time model's dt field. This integration timestep was chosen based on the highest significant frequency observed in the power spectrum, with significance determined relative to random phase surrogates. The period field contains the timescale associated with the dominant frequency in each system's power spectrum. When using the model.make_trajectory() method with the optional setting resample=True, integration is performed at the default dt, and the integrated trajectory is then resampled based on the period. The resulting trajectories have consistent dominant timescales across models, despite having different integration timesteps (see the sketch after this list).
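
A minimal sketch of how these fields can be inspected, assuming model.make_trajectory accepts the same resample and pts_per_period keywords shown above for make_trajectory_ensemble:

from dysts.flows import Lorenz

model = Lorenz()
print(model.dt, model.period)  # default integration timestep and dominant timescale

# Integrate at the default dt, then resample so that each dominant period spans
# a fixed number of points, as described above
sol = model.make_trajectory(1000, resample=True, pts_per_period=75)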

Acknowledgements

  • Two existing collections of named systems can be found on the webpages of Jürgen Meier and J. C. Sprott. The current version of dysts contains all systems from both collections.
  • Several of the analysis routines (such as calculation of the correlation dimension) use the library nolds. If re-using the fractal dimension code that depends on nolds, please be sure to credit that library and heed its license. The Lyapunov exponent calculation is based on the QR factorization approach used by Wolf et al. 1985 and Eckmann et al. 1986, with implementation details adapted from conventions in the Julia library DynamicalSystems.jl.

Ethics & Reporting

Dataset datasheets and metadata are reported using the dataset documentation guidelines described in Gebru et al. 2018; please see our preprint for a full dataset datasheet and other information. We note that all datasets included here are mathematical in nature and do not contain human or clinical observations. If any users become aware of unintended harms that may arise due to the use of this data, we encourage reporting them by submitting an issue on this repository.

Development to-do list

A partial list of potential improvements in future versions

  • Speed up the delay equation implementation
    • We need to roll our own implementation of DDE23 in the utils module.
  • Improve calculations of Lyapunov exponents for delay systems
  • Implement multivariate multiscale entropy and re-calculate for all attractors
  • Add a method for integrating multiple systems in parallel, based on a list of names and a set of shared settings
    • Multiprocessing can be used for a few systems, but greater speedups might be possible by compiling all right-hand sides into a single function acting on a large vector (a rough multiprocessing sketch appears after this list).
    • The same utility could also be used to integrate multiple initial conditions for the same model
  • Add a separate jacobian database file, and add an attribute that can be used to check if an analytical one exists. This will speed up numerical integration, as well as potentially aid in calculating Lyapunov exponents.
  • Align the initial phases, potentially by picking default starting initial conditions that lie on the attractor, but which are as close as possible to the origin
  • Expand and finalize the discrete dysts.maps module
    • Maps are deterministic but not differentiable, and so not all analysis methods will work on them. Will probably need a decorator to declare whether utilities work on flows, maps, or both
  • Switch stochastic integration to a newer package, like torchsde or sdepy
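
A rough sketch of the multiprocessing route mentioned in the list above; this is not part of the current dysts API, and the class names are only assumed to exist in dysts.flows:

from multiprocessing import Pool

import dysts.flows

# Rough sketch only: integrate several named systems in separate processes
def integrate_one(name, n=1000):
    model = getattr(dysts.flows, name)()
    return name, model.make_trajectory(n)

if __name__ == "__main__":
    names = ["Lorenz", "Rossler", "Thomas"]  # assumed class names in dysts.flows
    with Pool(processes=3) as pool:
        results = dict(pool.map(integrate_one, names))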