A benchmark dataset for emulating atmospheric radiative transfer in weather and climate models with machine learning (NeurIPS 2021 Datasets and Benchmarks Track)

Overview

ClimART - A Benchmark Dataset for Emulating Atmospheric Radiative Transfer in Weather and Climate Models


Official PyTorch Implementation

Using deep learning to optimise radiative transfer calculations.

Preliminary paper to appear at the NeurIPS 2021 Datasets and Benchmarks Track: https://openreview.net/forum?id=FZBtIpEAb5J

Abstract: Numerical simulations of Earth's weather and climate require substantial amounts of computation. This has led to a growing interest in replacing subroutines that explicitly compute physical processes with approximate machine learning (ML) methods that are fast at inference time. Within weather and climate models, atmospheric radiative transfer (RT) calculations are especially expensive. This has made them a popular target for neural network-based emulators. However, prior work is hard to compare due to the lack of a comprehensive dataset and standardized best practices for ML benchmarking. To fill this gap, we build a large dataset, ClimART, with more than 10 million samples from present, pre-industrial, and future climate conditions, based on the Canadian Earth System Model. ClimART poses several methodological challenges for the ML community, such as multiple out-of-distribution test sets, underlying domain physics, and a trade-off between accuracy and inference speed. We also present several novel baselines that indicate shortcomings of datasets and network architectures used in prior work.

Contact: Venkatesh Ramesh (venka97 at gmail) or Salva Rühling Cachay (salvaruehling at gmail).

Overview:

  • climart/: Package with the main code, baselines and ML training logic.
  • notebooks/: Notebooks for visualization of data.
  • analysis/: Scripts to create visualizations of the results (requires logging).
  • scripts/: Scripts to train and evaluate models, and to download the whole ClimART dataset.

Getting Started

Requirements

  • Linux and Windows are supported, but we recommend Linux for performance and compatibility reasons.
  • NVIDIA GPU with at least 8 GB of memory and a system with at least 12 GB of RAM (more RAM is required when training with the --load_train_into_mem option, which allows for faster training). We have done all testing and development using NVIDIA V100 GPUs.
  • 64-bit Python >=3.7 and PyTorch >=1.8.1. See https://pytorch.org/ for PyTorch install instructions.
  • Python libraries listed in the ``env.yml`` file; see Getting Started (miniconda/conda needs to be installed).

Downloading the ClimART Dataset

By default, only a subset of ClimART is downloaded. To download the train/val/test years you want, adjust the loop in ``data_download.sh`` accordingly. To download the whole ClimART dataset, you can simply run

bash scripts/download_climart_full.sh 

conda env create -f env.yml   # create a new environment with all dependencies
conda activate climart  # activate the environment called 'climart'
bash data_download.sh  # download the dataset (or a subset of it, see above)
# For one of {CNN, GraphNet, GCN, MLP}, run the model with its lowercase name using the following command:
bash scripts/train_<model-name>.sh
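
To quickly verify that the environment meets the requirements above (64-bit Python >=3.7, PyTorch >=1.8.1, a CUDA GPU with enough memory), a small sanity check like the following can be run. This snippet is not part of the repository:

import sys
import torch

# Sanity check against the requirements above; not part of the ClimART code.
print("Python version:", sys.version.split()[0])   # should be >= 3.7
print("PyTorch version:", torch.__version__)       # should be >= 1.8.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # At least 8 GB of GPU memory is recommended above.
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GB memory")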

Dataset Structure

To avoid storage redundancy, we store a single input array for both pristine- and clear-sky conditions. The dimensions of ClimART’s input arrays are:

  • layers: (N, 49, D_lay)
  • levels: (N, 50, 4)
  • globals: (N, 82)

where N is the data dimension (i.e., the number of examples of a specific year or, during training, of a batch), 49 and 50 are the number of layers and levels in a column, respectively, and D_lay, 4, and 82 are the number of features/channels for layers, levels, and globals, respectively.

For pristine-sky conditions D_lay = 14, while for clear-sky conditions D_lay = 45, since the latter contains extra aerosol-related variables. The array for pristine-sky conditions can thus be obtained by slicing out the first 14 features of the stored array, e.g.: pristine_array = layers_array[:, :, :14]
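
As a concrete illustration of the slicing above, here is a minimal NumPy sketch. The placeholder arrays only match the documented shapes; the names (other than layers_array, used above) and the random data are hypothetical stand-ins for the real downloaded data:

import numpy as np

# Placeholder arrays with the documented shapes; the real arrays come from
# the downloaded ClimART files. N is the number of columns/examples.
N = 32
layers_array = np.random.randn(N, 49, 45).astype(np.float32)   # clear-sky layers, D_lay = 45
levels_array = np.random.randn(N, 50, 4).astype(np.float32)    # levels
globals_array = np.random.randn(N, 82).astype(np.float32)      # globals

# Pristine-sky inputs are simply the first 14 layer features:
pristine_array = layers_array[:, :, :14]
assert pristine_array.shape == (N, 49, 14)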

The complete list of variables in the dataset is as follows:

Variables List

Training Options

--exp_type: "pristine" or "clear_sky" for training on the respective atmospheric conditions.
--target_type: "longwave" (thermal) or "shortwave" (solar) for training on the respective radiation type targets.
--target_variable: "Fluxes" or "Heating-rate" for training on profiles of fluxes or heating rates.
--model: ML model architecture to select for training (MLP, GCN, GN, CNN)
--workers: The number of workers to use for dataloading/multi-processing.
--device: "cuda" or "cpu" to use GPUs or not.
--load_train_into_mem: Whether to load the training data into memory (can speed up training)
--load_val_into_mem: Whether to load the validation data into memory (can speed up training)
--lr: The learning rate to use for training.
--epochs: Number of epochs to train the model for.
--optim: The choice of optimizer to use (e.g. Adam)
--scheduler: The learning rate scheduler used for training (expdecay, reducelronplateau, steplr, cosine).
--weight_decay: Weight decay to use for the optimization process.
--batch_size: Batch size for training.
--act: Activation function (e.g. ReLU, GeLU, ...).
--hidden_dims: The hidden dimensionalities to use for the model (e.g. 128 128).
--dropout: Dropout rate to apply in the model.
--loss: Loss function to train the model with (MSE recommended).
--in_normalize: Select how to normalize the data (Z, min_max, None). Z-scaling is recommended.
--net_norm: Normalization scheme to use in the model (batch_norm, layer_norm, instance_norm)
--gradient_clipping: If "norm", the L2-norm of the gradients is clipped to the value of --clip (see the sketch after this list). Otherwise, no clipping is applied.
--clip: Value to clip the gradient to while training.
--val_metric: Which metric to use for saving the 'best' model based on validation set. Default: "RMSE"
--gap: Use global average pooling in place of an MLP to produce the output (CNN only).
--learn_edge_structure: If --model == 'GCN': whether to use an L-GCN with a learnable adjacency matrix (if set) or a plain GCN.
--train_years: The years of data to use for training (either individual years, e.g. "1997+1991", or a range, e.g. "1991-1996").
--validation_years: The years of data to use for validation. Recommended: "2005" or "2005-06"
--test_ood_1991: Whether to load and test on OOD data from 1991 (Mt. Pinatubo; especially challenging for clear-sky conditions)
--test_ood_historic: Whether to load and test on historic/pre-industrial OOD data from 1850-52.
--test_ood_future: Whether to load and test on future OOD data from 2097-99 (under a changing climate/radiative forcing)
--wandb_mode: If "online", enables Weights & Biases logging. If "disabled", no logging.
--expID: A unique ID for the experiment if using logging.
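
For reference, here is a minimal PyTorch sketch of what --gradient_clipping "norm" with --clip 1.0 corresponds to in a generic training step; the model, data, and loss below are placeholders, not the actual ClimART training code:

import torch
import torch.nn as nn

# Placeholder model and data; the Adam settings mirror the baseline commands below.
model = nn.Linear(14, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, weight_decay=1e-6)
x, y = torch.randn(8, 14), torch.randn(8, 1)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
# Clip the global L2-norm of the gradients to the value passed via --clip:
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()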

Reproducing our Baselines

To reproduce our paper results (for seed = 7), you may run the following commands in a shell.

CNN

python main.py --model "CNN" --exp_type "pristine" --target_type "shortwave" --workers 6 --seed 7 \
  --batch_size 128 --lr 2e-4 --optim Adam --weight_decay 1e-6 --scheduler "expdecay" \
  --in_normalize "Z" --net_norm "none" --dropout 0.0 --act "GELU" --epochs 100 \
  --gap --gradient_clipping "norm" --clip 1.0 \
  --train_years "1990+1999+2003" --validation_years "2005" \
  --wandb_mode disabled

MLP

python main.py --model "MLP" --exp_type "pristine" --target_type "shortwave" --workers 6 --seed 7 \
  --batch_size 128 --lr 2e-4 --optim Adam --weight_decay 1e-6 --scheduler "expdecay" \
  --in_normalize "Z" --net_norm "layer_norm" --dropout 0.0 --act "GELU" --epochs 100 \
  --gradient_clipping "norm" --clip 1.0 --hidden_dims 512 256 256 \
  --train_years "1990+1999+2003" --validation_years "2005" \
  --wandb_mode disabled

GCN

python main.py --model "GCN+Readout" --exp_type "pristine" --target_type "shortwave" --workers 6 --seed 7 \
  --batch_size 128 --lr 2e-4 --optim Adam --weight_decay 1e-6 --scheduler "expdecay" \
  --in_normalize "Z" --net_norm "layer_norm" --dropout 0.0 --act "GELU" --epochs 100 \
  --preprocessing "mlp_projection" --projector_net_normalization "layer_norm" --graph_pooling "mean" \
  --residual --improved_self_loops \
  --gradient_clipping "norm" --clip 1.0 --hidden_dims 128 128 128 \
  --train_years "1990+1999+2003" --validation_years "2005" \
  --wandb_mode disabled

Logging

Logging is disabled by default. To log experiments with Weights & Biases, pass the argument --wandb_mode=online.

Notebooks

The notebooks folder contains Jupyter notebooks that we used for plotting, benchmarking, etc. You may go through them to visualize the results or benchmark the models.

License:

This work is made available under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

Development

This repository is currently under active development, and you may encounter bugs in some functionality. Any feedback, extensions, and suggestions are welcome!

Citation

If you find ClimART or this repository helpful, feel free to cite our publication:

@inproceedings{cachay2021climart,
    title={{ClimART}: A Benchmark Dataset for Emulating Atmospheric Radiative Transfer in Weather and Climate Models},
    author={Salva R{\"u}hling Cachay and Venkatesh Ramesh and Jason N. S. Cole and Howard Barker and David Rolnick},
    booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
    year={2021},
    url={https://openreview.net/forum?id=FZBtIpEAb5J}
}