Official PyTorch implementation of "Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics".

Overview

Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics

This repository is the official PyTorch implementation of "Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics".

Sungyong Seo*, Chuizheng Meng*, Yan Liu, Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics, ICLR 2020.

Data

Download the required data.zip from Google Drive, then:

cd /path/to/the/root/of/project
mkdir data
mv /path/to/data.zip ./data/
cd data
unzip data.zip

Environment

Docker (Recommended!)

First, follow the official documentation of Docker and nvidia-docker to install Docker with CUDA support.

Use the following commands to build a Docker image containing all the necessary packages:

cd docker
bash build_docker.sh

This script also copies jupyter_notebook_config.py, the Jupyter Notebook configuration file, into the Docker image. The default Jupyter Notebook password is 12345.
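If you want a different password, one option (not part of the provided build script, and assuming the classic Jupyter Notebook configuration) is to generate a new password hash and put it into jupyter_notebook_config.py before rebuilding the image:

# Generate a hashed password; "my-new-password" is a placeholder.
python -c "from notebook.auth import passwd; print(passwd('my-new-password'))"
# Copy the printed hash into jupyter_notebook_config.py, e.g.
#   c.NotebookApp.password = u'sha1:...'
# then rebuild the image with: bash build_docker.sh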

Use the following script to create a container from the built image:

bash rundocker-melady.sh

If the project directory is not under your home directory, modify rundocker-melady.sh to change the file mapping.
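For orientation, the part to adjust is the host-to-container volume mapping in the docker run command inside rundocker-melady.sh. The sketch below only illustrates the general shape; the actual image name, container paths, and extra flags in the script may differ:

# Hypothetical docker run line; replace the host path and image name with your own.
docker run --runtime=nvidia -it \
    -v /path/to/the/root/of/project:/workspace/project \
    -p 8888:8888 \
    <image-built-by-build_docker.sh>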

Manual Installation

# install python packages
pip install pyyaml tensorboardX geopy networkx tqdm
conda install pytorch==1.1.0 torchvision==0.2.2 cudatoolkit=9.0 -c pytorch
conda install -y matplotlib scipy pandas jupyter scikit-learn geopandas
conda install -y -c conda-forge jupyterlab igl meshplot

# install pytorch_geometric
export PATH=/usr/local/cuda/bin:$PATH
export CPATH=/usr/local/cuda/include:$CPATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
pip install --verbose --no-cache-dir torch-scatter==1.2.0
pip install --verbose --no-cache-dir torch-sparse==0.4.0
pip install --verbose --no-cache-dir torch-cluster==1.3.0
pip install --verbose --no-cache-dir torch-spline-conv==1.1.0
pip install torch-geometric==1.1.2

# specify numpy==1.16.2 to avoid loading error (>=1.16.3 may require allow_pickle=True in np.load)
pip install -I numpy==1.16.2 
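After installation, a quick sanity check (a suggestion, not part of the original instructions) is to confirm that the pinned packages import and that CUDA is visible:

# verify PyTorch, CUDA, and PyTorch Geometric are usable
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import torch_geometric; print(torch_geometric.__version__)"
python -c "import numpy; print(numpy.__version__)"  # should print 1.16.2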

Run

Experiments in Section 3.1 "Approximation of Directional Derivatives"

See the Jupyter Notebook approx-gradient/synthetic-gradient-approximation.ipynb for details.
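If you are not using the Jupyter setup inside the Docker image, you can open the notebook directly (assuming Jupyter is installed in your environment):

cd approx-gradient
jupyter notebook synthetic-gradient-approximation.ipynb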

Experiments in Section 3.2 "Graph Signal Prediction" and Section 4 "Prediction: Graph Signals on Land-based Weather Stations"

cd scripts
python train.py --extconf /path/to/exp/config/file --mode train --device cuda:0

Examples:

  • PA-DGN, Graph Signal Prediction of Synthetic Data
    cd scripts
    python train.py --extconf ../confs/iclrexps/irregular_varicoef_diff_conv_eqn_4nn_42_250sample/GraphPDE_GN_sum_notshared_4nn/conf.yaml --mode train --device cuda:0
  • PA-DGN, Prediction of Graph Signals on Land-based Weather Stations
    cd scripts
    python train.py --extconf ../confs/iclrexps/noaa_pt_states_withloc/GraphPDE_GN_RGN_16_notshared_4nn/conf.yaml --mode train --device cuda:0
  • PA-DGN, Sea Surface Temperature (SST) Prediction
    cd scripts
    python train.py --extconf ../confs/iclrexps/sst-daily_4nn_42_250sample/GraphPDE_GN_sum_notshared_4nn/conf.yaml --mode train --device cuda:0
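The three examples only differ in the configuration file, so they can be run back to back with a simple loop over the listed configs (a convenience sketch, not a script shipped with the repository):

cd scripts
for conf in \
    ../confs/iclrexps/irregular_varicoef_diff_conv_eqn_4nn_42_250sample/GraphPDE_GN_sum_notshared_4nn/conf.yaml \
    ../confs/iclrexps/noaa_pt_states_withloc/GraphPDE_GN_RGN_16_notshared_4nn/conf.yaml \
    ../confs/iclrexps/sst-daily_4nn_42_250sample/GraphPDE_GN_sum_notshared_4nn/conf.yaml
do
    python train.py --extconf "$conf" --mode train --device cuda:0
done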

Summary of Results

You can use results/print_results.ipynb to print tables of experiment results, including the mean and standard error of the mean absolute error (MAE) for the prediction tasks.

Reference

@inproceedings{seo*2020physicsaware,
  title={Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics},
  author={Sungyong Seo* and Chuizheng Meng* and Yan Liu},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=r1gelyrtwH}
}