Can we visualize a large scientific data set with a surrogate model? We're building a GAN for the Earth's Mantle Convection data set to see if we can!

Overview

EarthGAN - Earth Mantle Surrogate Modeling

Can a surrogate model of the Earth's Mantle Convection data set be built such that it can readily run in a web browser and produce high-fidelity results? We're trying to do just that through the use of a generative adversarial network -- we call ours EarthGAN. This is an active research project.

See how EarthGAN currently works! Open up the Colab notebook and create results from the preliminary generator: Open In Colab

(Figure: sample generator output comparison at epoch 41, radial index 165, Mollweide projection)

Progress updates, along with my thoughts, can be found in the devlog. The preliminary results were presented at VIS 2021 as part of the SciVis Contest. See the paper on arXiv.

This is active research. If you have any thoughts, suggestions, or would like to collaborate, please reach out! You can also post questions/ideas in the discussions section.


Current Approach

We're leveraging the excellent work of Li et al. who have implemented a GAN for creating super-resolution cosmological simulations. The general method is in their map2map repository. We've used their GAN implementation as it works on 3D data. Please cite their work if you find it useful!

The current approach is based on the StyleGAN2 model. In addition, a conditional-GAN (cGAN) is used to produce results that are partially deterministic.
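
To make the conditional setup concrete, here is a minimal, illustrative PyTorch sketch (not the actual EarthGAN or map2map code): a generator upsamples a low-resolution 3D field, and a critic scores a high-resolution field together with the conditioning input. All layer choices, sizes, and names here are assumptions for illustration only.

```python
# Illustrative sketch only -- the real EarthGAN/map2map models (StyleGAN2-based)
# are considerably more involved. Layers and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Upsample a low-resolution 3D field; the low-res input is the condition."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, width, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose3d(width, width, kernel_size=4, stride=2, padding=1),  # 2x upsample
            nn.LeakyReLU(0.2),
            nn.Conv3d(width, channels, kernel_size=3, padding=1),
        )

    def forward(self, lowres):
        return self.net(lowres)

class Critic(nn.Module):
    """Score a high-res field concatenated with the (upsampled) low-res condition."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2 * channels, width, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(width, width, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(width, 1),
        )

    def forward(self, highres, lowres):
        # Conditioning: upsample the low-res field and concatenate it channel-wise.
        cond = F.interpolate(lowres, scale_factor=2, mode="trilinear", align_corners=False)
        return self.net(torch.cat([highres, cond], dim=1))

# Quick shape check on a tiny random volume.
g, c = Generator(), Critic()
lowres = torch.randn(1, 1, 8, 8, 8)
fake = g(lowres)         # -> (1, 1, 16, 16, 16)
score = c(fake, lowres)  # -> (1, 1)
print(fake.shape, score.shape)
```

Because the generator is conditioned on the low-resolution input rather than on pure noise, its output is largely determined by that input -- which is what makes the results "partially deterministic".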

Setup

Works best in an HPC environment (I used Compute Canada). Also tested locally on Linux (MacOS should work too). If you run Windows, you'll have to do much of the environment setup and data download/preprocessing manually.

To reproduce the data pipeline and begin training, follow the steps below (a consolidated command summary follows the list): *

  1. Clone this repo: `git clone https://github.com/tvhahn/EarthGAN.git`

  2. Create the virtual environment. Assumes Conda is installed when on a local computer.

    • HPC: `make create_environment` will detect the HPC environment and automatically create the environment from make_hpc_venv.sh. Tested on Compute Canada. Modify make_hpc_venv.sh for your own HPC cluster.

    • Linux/MacOS: use the command from the Makefile: `make create_environment`

  3. Download raw data.

    • HPC: use `make download`. Will automatically detect the HPC environment.

    • Linux/MacOS: use `make download`. Will automatically download to the appropriate data/raw directory.

  4. Extract raw data.

    • HPC: use `make extract`. Will automatically detect the HPC environment. Again, modify for your HPC cluster.

    • Linux/MacOS: use `make extract`. Will automatically extract to the appropriate data/raw directory.

  5. Ensure the virtual environment is activated: `conda activate earth`

  6. From the root directory of EarthGAN, run `pip install -e .` -- this will give the Python scripts access to the src folder.

  7. Create the processed data that will be used for training.

    • HPC: use `make data`. Will automatically detect the HPC environment and create the processed data.

      📝 Note: You will have to modify make_hpc_data.sh in the ./bash_scripts/ folder to match the requirements of your HPC environment.

    • Linux/MacOS: use `make data`.

  8. Copy the processed data to the scratch folder if you're on the HPC. Modify copy_processed_data_to_scratch.sh in the ./bash_scripts/ folder.

  9. Train!

    • HPC: use `make train`. Again, modify for your HPC cluster. Not yet optimized for multi-GPU training, so be warned, it will be SLOW!

    • Linux/MacOS: use `make train`.

* Let me know if you run into any problems! This is still in development.
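
For a local Linux/MacOS setup, the steps above roughly condense to the commands below. This is a convenience summary of the numbered list, not a tested script; on an HPC cluster, use the HPC-specific targets and modify the bash scripts as described above.

```bash
# Local (Linux/MacOS) quick path -- see the numbered steps above for details.
git clone https://github.com/tvhahn/EarthGAN.git
cd EarthGAN
make create_environment   # create the conda environment (envearth.yml)
conda activate earth
make download             # download the raw Earth Mantle data to data/raw
make extract              # extract the raw data
pip install -e .          # make the src package importable
make data                 # build the processed training data
make train                # start training
```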

Project Organization

├── Makefile           <- Makefile with commands like `make data` or `make train`
│
├── bash_scripts       <- Bash scripts used for training models or setting up the environment
│   ├── train_model_hpc.sh       <- Bash/SLURM script used to train models on HPC (you will need to modify this to work on your HPC). Called with `make train`
│   └── train_model_local.sh     <- Bash script used to train models locally. Called with `make train`
│
├── data
│   ├── interim        <- Intermediate data before we've applied any scaling.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- Original data from the Earth Mantle Convection simulation.
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│   ├── interim        <- Interim models and summaries
│   └── final          <- Final, canonical models
│
├── notebooks          <- Jupyter notebooks. Generally used for explaining various components
│   │                     of the code base.
│   └── scratch        <- Rough-draft notebooks, of questionable quality. Be warned!
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- Recommend using `make create_environment`. However, this file can be
│                         used to recreate the environment with pip
├── envearth.yml       <- Used to create the conda environment. Use `make create_environment` when
│                         on a local computer
│
├── setup.py           <- Makes the project pip installable (`pip install -e .`) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   ├── make_dataset.py      <- Script for making downsampled data from the original
│   │   ├── data_prep_utils.py   <- Misc functions used in data prep
│   │   ├── download.sh          <- Bash script to download the entire Earth Mantle data set
│   │   │                           (used when `make download` is called)
│   │   └── extract.sh           <- Bash script to extract all Earth Mantle data set files
│   │                               from zip (used when `make extract` is called)
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   │
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results-oriented visualizations
│       └── visualize.py
│
├── LICENSE
└── README.md          <- README describing the project.
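
As a small example of why the editable install in step 6 matters: once `pip install -e .` has run, the project code can be imported by name from notebooks or scripts. The snippet below uses module names from the tree above and assumes the data and models folders are importable subpackages.

```python
# Assumes `pip install -e .` has been run and that src/data and src/models
# are importable subpackages; module names are taken from the tree above.
from src.data import data_prep_utils   # misc data-prep helpers
from src.models import train_model     # training entry point
```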