Amortized Assimilation

This repository contains a PyTorch implementation of the paper Learning to Assimilate in Chaotic Dynamical Systems.

Abstract: The accuracy of simulation-based forecasting in chaotic systems is heavily dependent on high-quality estimates of the system state at the time the forecast is initialized. Data assimilation methods are used to infer these initial conditions by systematically combining noisy, incomplete observations and numerical models of system dynamics to produce effective estimation schemes. We introduce amortized assimilation, a framework for learning to assimilate in dynamical systems from sequences of noisy observations with no need for ground truth data. We motivate the framework by extending powerful results from self-supervised denoising to the dynamical systems setting through the use of differentiable simulation.

Installation

Requirements

This code can be memory-heavy, as each experiment unrolls at least 40 assimilation steps (which, from a memory perspective, is equivalent to a 40x deeper network plus whatever is needed for the simulation). The current settings are tuned to max out memory usage on a GTX 1070 GPU. The easiest knobs for reducing memory usage are the network width and the ensemble size. Gradient checkpointing could significantly improve memory utilization but is not currently implemented; one possible approach is sketched below.
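If you want to experiment with checkpointing yourself, a minimal sketch using torch.utils.checkpoint over an unrolled assimilation loop might look like the following. The assimilate_step, state, and observations names are hypothetical stand-ins, not this repository's actual API.

import torch
from torch.utils.checkpoint import checkpoint

def unroll_with_checkpointing(assimilate_step, state, observations):
    # Unroll the assimilation sequence, recomputing each step's activations
    # during the backward pass instead of storing them (trades memory for compute).
    analyses = []
    for obs in observations:  # e.g. 40+ assimilation steps
        state = checkpoint(assimilate_step, state, obs)
        analyses.append(state)
    return torch.stack(analyses)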

To install the dependencies, use the provided requirements.txt file:

pip install -r requirements.txt 

The code also depends on torchdiffeq. Installation instructions can be found at https://github.com/rtqichen/torchdiffeq and are copied below:

pip install git+https://github.com/rtqichen/torchdiffeq
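For reference, torchdiffeq exposes a differentiable odeint that the experiments rely on for simulation. A minimal, self-contained example integrating the standard Lorenz 96 dynamics (forcing F = 8, 40 variables) might look like this; the function below is illustrative and is not the implementation used in this repo:

import torch
from torchdiffeq import odeint  # differentiable ODE solver

def lorenz96(t, x, F=8.0):
    # Standard Lorenz 96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (torch.roll(x, -1) - torch.roll(x, 2)) * torch.roll(x, 1) - x + F

x0 = torch.randn(40)                  # random 40-variable initial state
t = torch.arange(0.0, 4.0, 0.1)       # integration times (step size 0.1)
trajectory = odeint(lorenz96, x0, t)  # shape (len(t), 40), differentiable w.r.t. x0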

To run the DA comparison models, you will need to install DAPPER. Instructions can be found here: https://github.com/nansencenter/DAPPER.

Installing this package

A setup.py file is included for installation. Navigate to the repository root and run:

pip install -e . 

Run experiments

All experiments can be run from experiments/run_*.py. Default settings are those used in the paper. First navigate to the experiments directory, then execute one of the following:

L96 Full Observations

python run_L96Conv.py --obs_conf full_obs

L96 Partial Observations (every fourth dimension)

python run_L96Conv.py --obs_conf every_4th_dim_partial_obs

VL20 Partial

python run_VLConv.py --obs_conf every_4th_dim_partial_obs

KS Full

python run_KS.py 

Other options of interest include the integrator step size (--step_size, default 0.1), the observation error (--noise, default 1.0), the ensemble size (--m, default 10), and the network width (--hidden_size, default 64 for the convolutional models). The L96 code also includes options for self-supervised and supervised analysis losses (ss_analysis, clean_analysis), used to create Figure 6 in the paper. Custom observation operators can be created in the same style as those found in obs_configs.py; a rough sketch follows.
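The exact structure expected by obs_configs.py is defined in that file; purely as an illustrative sketch, an operator observing every fourth state dimension could be written along these lines (names and interface are assumptions):

import torch

def every_4th_dim_obs(x):
    # Hypothetical observation operator: keep every fourth state dimension.
    # The real operators in obs_configs.py may use a different interface,
    # e.g. index tensors or masks.
    return x[..., ::4]

def every_4th_dim_mask(state_dim):
    # Equivalent boolean-mask form, if the code expects a mask instead.
    mask = torch.zeros(state_dim, dtype=torch.bool)
    mask[::4] = True
    return mask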

Parameters for the traditional DA approaches were tuned via grid search over shorter sequences. Those hyperparameters were then used for the longer assimilation sequences.

To test a new architecture, make sure it follows the same API as the models in models.py; otherwise it should slot in without major issues. A rough skeleton is sketched below.
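The authoritative signatures are in models.py; assuming a forward pass that maps a forecast state and a new observation to an updated (analysis) state, a drop-in architecture might look roughly like this:

import torch
import torch.nn as nn

class MyAssimilator(nn.Module):
    # Illustrative skeleton only; match the actual interfaces in models.py.
    def __init__(self, state_dim, obs_dim, hidden_size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + obs_dim, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, state_dim),
        )

    def forward(self, forecast, observation):
        # Map (forecast state, noisy observation) -> corrected analysis state.
        inp = torch.cat([forecast, observation], dim=-1)
        return forecast + self.net(inp)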

Datasets

Code for generating the Lorenz 96, VL20, and KS datasets is included under amortized_assimilation/data_utils.py.
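Conceptually, generating a training set amounts to simulating a long trajectory and corrupting it with Gaussian observation noise (the --noise default of 1.0 above). A rough sketch, reusing the illustrative Lorenz 96 function from the installation section rather than the actual data_utils.py code:

import torch
from torchdiffeq import odeint

def make_noisy_observations(rhs, x0, t, noise_std=1.0, seed=0):
    # Simulate a clean trajectory, then add Gaussian observation noise.
    # The real generation code lives in amortized_assimilation/data_utils.py
    # and may differ in detail.
    torch.manual_seed(seed)
    with torch.no_grad():
        truth = odeint(rhs, x0, t)
    observations = truth + noise_std * torch.randn_like(truth)
    return truth, observations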

References

DAPPER: Raanes, P. N., & others. (2018). nansencenter/DAPPER: Version 0.8. https://doi.org/10.5281/zenodo.2029296

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. 1835825. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.


If you found the code or ideas in this repository useful, please consider citing:

@article{mccabe2021l2assim,
  title={Learning to Assimilate in Chaotic Dynamical Systems},
  author={McCabe, Michael and Brown, Jed},
  journal={Advances in Neural Information Processing Systems},
  year={2021}
}