Deep learning project at the Technical University of Denmark (DTU) about Neural ODEs for finding dynamics in ordinary differential equations and real-world time series data

Overview

Authors

Marcus Lenler Garsdal, [email protected]

Valdemar Søgaard, [email protected]

Simon Moe Sørensen, [email protected]

Introduction

This repo contains the code used for the paper Time series data estimation using Neural ODE in Variational Auto Encoders.

Using PyTorch and Neural ODEs (NODEs), it attempts to learn the true dynamics of time series data from toy examples such as clockwise and counterclockwise spirals, and three variants of sine waves: a standard non-dampened sine wave, a dampened sine wave, and an exponentially decaying and dampened sine wave. Finally, the NODE is trained on real-world time series data of solar power curves.
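The generators themselves live in the data code of this repository; the following is only a minimal sketch of how such a toy sine-wave dataset might be produced (the function name and parameters are illustrative, not the repository's actual API):

```python
import numpy as np
import torch

def make_sine_batch(n_curves=500, n_points=300, t_max=6 * np.pi, damping=0.0, seed=0):
    """Toy trajectories y(t) = exp(-damping * t) * sin(freq * t + phase) with random frequency and phase."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, t_max, n_points)                      # shared time grid
    freqs = rng.uniform(0.5, 1.5, size=(n_curves, 1))          # per-curve frequency
    phases = rng.uniform(0.0, 2 * np.pi, size=(n_curves, 1))   # per-curve phase
    y = np.exp(-damping * t) * np.sin(freqs * t + phases)      # (n_curves, n_points)
    # Shape (n_curves, n_points, 1): one scalar observation per time step.
    return torch.tensor(t, dtype=torch.float32), torch.tensor(y[..., None], dtype=torch.float32)

# damping=0.0 gives the standard sine wave; damping > 0 gives the dampened variants.
ts, trajectories = make_sine_batch(damping=0.1)
```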

The performance of the NODEs is compared to an LSTM VAE baseline on RMSE and time per epoch.
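RMSE is computed between the reconstructed and the ground-truth trajectories; a minimal sketch of that metric in PyTorch (tensor names are illustrative):

```python
import torch

def rmse(reconstruction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Root mean squared error over all trajectories and time steps."""
    return torch.sqrt(torch.mean((reconstruction - target) ** 2))
```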

This is a purely research- and curiosity-driven project.

Code structure

To make development and research more seamless, an object-oriented approach was taken to improve efficiency and consistency across multiple runs. This also makes it easier to extend and change workflows across multiple models at once.

Source files

The src folder contains the source code. The main components of the source code are:

  • data.py: Data loading object. Primarily uses data generation functions.
  • model.py: Contains the model implementations and the abstract TrainerModel class, which defines the interface expected by the Trainer in train.py (see the sketch after this list).
  • train.py: A generalized Trainer class used to train subclasses of the TrainerModel class. Moreover, it saves and loads different types of models and handles model visualizations.
  • utils.py: Standard utility functions
  • visualize.py: Visualizes model properties such as reconstructions, loss curves and original data samples
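The intended interaction between TrainerModel and the Trainer is roughly the following. This is a hypothetical sketch, assuming subclasses expose a loss the trainer can optimize; the method names (loss, fit) are illustrative, not the repository's actual API:

```python
import torch
from torch import nn

class TrainerModel(nn.Module):
    """Abstract base class: subclasses implement the loss the Trainer optimizes."""
    def loss(self, batch: torch.Tensor) -> torch.Tensor:
        raise NotImplementedError

class Trainer:
    """Generic training loop over any TrainerModel subclass."""
    def __init__(self, model: TrainerModel, lr: float = 1e-3):
        self.model = model
        self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    def fit(self, loader, epochs: int = 100):
        for _ in range(epochs):
            for batch in loader:
                self.optimizer.zero_grad()
                loss = self.model.loss(batch)
                loss.backward()
                self.optimizer.step()
```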

Experiments

In addition, there are three folders, one for each type of dataset:

  • real/: Contains data for solar power curves and main script for training the solar power model
  • spring/: Generates spring examples and trains spring models
  • toy/: Generates spiral examples and trains spiral models

Each main.py script takes a number of relevant parameters as input, enabling parameter tuning and experimentation with different model types and dataset sizes and types. The available parameters can be read from the respective files.
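Based on the flags shown in the example command in the next section, the argument parsing in such a main.py looks roughly like the following (the defaults and the exact flag set are illustrative and vary per dataset):

```python
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="Train a NODE or LSTM VAE model on a chosen dataset")
    parser.add_argument("--epochs", type=int, default=1000)      # number of training epochs
    parser.add_argument("--num-data", type=int, default=500)     # number of generated trajectories
    parser.add_argument("--latent-dim", type=int, default=4)     # VAE latent dimension
    parser.add_argument("--hidden-dim", type=int, default=30)    # ODE-function hidden size
    parser.add_argument("--lr", type=float, default=1e-3)        # learning rate
    parser.add_argument("--solver", type=str, default="rk4")     # ODE solver
    return parser.parse_args()
```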

Running the code

To run the code, use the following command in a terminal with the project root as the working directory: python -m src.[dataset].main [--args]

For example: python3 -m src.toy.main --epochs 1000 --freq 100 --num-data 500 --n-total 300 --n-sample 200 --n-skip 1 --latent-dim 4 --hidden-dim 30 --lstm-hidden-dim 45 --lstm-layers 2 --lr 0.001 --solver rk4

Setup environment

Create a new Python environment and install the packages from requirements.txt using

pip install -r requirements.txt

Run python notebook

Install Jupyter with pip install jupyter and run a server using jupyter notebook or any supported software such as Anaconda.

Then open run_experiments.ipynb and run the first cell. If the cell succeeds, you should see outputs in experiment/output/png/**
