The Instructed Glacier Model (IGM)

Overview

The Instructed Glacier Model (IGM) simulates the ice dynamics, the surface mass balance, and their coupling through mass conservation to predict the evolution of glaciers, icefields, or ice sheets (Figs. 1 and 2).

The specificity of IGM is that it models the ice flow by a Convolutional Neural Network (CNN), which is trained with state-of-the-art ice flow models (Fig. 3). By doing so, the most computationally demanding model component is substituted by a cheap emulator, permitting speed-ups of several orders of magnitude at the cost of a minor loss in accuracy.


IGM consists of an open-source Python code, which runs across both CPU and GPU and deals with two-dimensional gridded input and output data. Together with a companion library of ice flow emulators, IGM permits user-friendly, highly efficient, and mechanically state-of-the-art glacier simulations.

Installing Python Packages

IGM is written in Python and requires the numpy, matplotlib, netCDF4, tensorflow (version 2.4.0 or later), and keras libraries. I recommend creating a dedicated Python environment 'igm' by typing the following commands (here we use conda):

conda create --name igm python=3.7
conda activate igm
conda install matplotlib numpy netCDF4 
pip install tensorflow==2.4.0   
pip install keras
pip install tensorflow-addons
pip install -U protobuf  # this is necessary for TF 2.5.0

Optional: For the best performance, I recommend running IGM on a GPU. For that purpose, you need to additionally install i) cuda, ii) cudnn, and iii) tensorflow-gpu. Make sure that the i) cuda, ii) cudnn, iii) tensorflow, and iv) python versions are compatible, and that your Nvidia driver is compatible with your version of cuda. Such incompatibilities are the most common source of issues. Here is an example installation:

conda install cudatoolkit=11.0 cudnn=8.0 -c conda-forge 
pip install tensorflow-gpu==2.4.0    
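
You can then check that TensorFlow actually detects your GPU with the following one-liner (it prints an empty list if no GPU is visible):

python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"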

Quick start with examples

Once the above packages have been installed, you may already run the ready-to-use examples in the folder examples/, which contains input data and scripts with the necessary commands (an example invocation is given after the list). To date, it contains two examples:

  • aletsch-simple provides a simple set-up for an advance-retreat simulation of the largest glacier of the European Alps -- Aletsch Glacier, Switzerland -- using a simple parametrization of the mass balance based on time-varying Equilibrium Line Altitudes (ELA), as well as an example of a fully-customized mass balance routine implementing an oscillating ELA.

  • cluster-simple is similar to aletsch-simple, but covers a wider domain including tens of glaciers to demonstrate the capability of IGM to model a glacier network.
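
For instance, a run might look as follows (a sketch only; the assumption that each example folder ships a driver script named igm-run.py may not hold for all examples):

cd examples/aletsch-simple   # enter the example folder
python igm-run.py            # run the example with its bundled settings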

About the code

The IGM code is packed into a single file src/igm.py, which defines the class igm and contains everything we need -- variables and functions -- to run a time-evolution glacier model. Just explore it.

The IGM core code implements a simple mass balance parametrization based on equilibrium line altitude, accumulation and ablation vertical gradients, and maximum accumulation rates. More elaborate mass balance models as well as climate forcings can easily be added to IGM with user-defined functions.
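
To make this concrete, the core of such an ELA-based parametrization fits in a few lines (a minimal sketch, assuming an igm instance as created in the next section; the ELA value and the gradient and clipping values are taken from the 'sinus' example further below, not from IGM's defaults):

    import tensorflow as tf

    ela = 2800.0                             # assumed equilibrium line altitude (m asl)
    smb = igm.usurf - ela                    # elevation relative to the ELA
    smb *= tf.where(smb < 0, 0.005, 0.009)   # apply ablation and accumulation gradients
    smb = tf.clip_by_value(smb, -100, 2.0)   # cap the accumulation rate at 2 m/y
    igm.smb.assign(smb)                      # write back into the model state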

The simplest usage

Assuming the necessary input files are available, the easiest way to run IGM is to run the following command in a Unix terminal:

export PYTHONPATH=/your/path/to/IGM/src # or cp /your/path/to/IGM/src/igm.py .
python -c "from igm import igm ; igm = igm() ; igm.run()" --tstart 0 --usegpu False

which imports the igm class, creates an instance of it, and runs it with the desired options like --tstart 0. The full list of options can be checked by simply adding --help. Here are the most important ones (an example invocation follows the list):

  --working_dir         Working directory
  --model_lib_path      Path of the trained ice flow emulator
  --geology_file        Geology file name
  --tstart              Starting time
  --tend                End time
  --tsave               Saving frequency
  --plot_result         Plot results in png when saved (alternative to NetCDF)
  --plot_live           Plot live the results during computation
  --usegpu              Use the GPU (recommended if you have one)
  --cfl                 CFL number for the transport scheme stability (must be below 1)
  --mb_simple_file      Time-varying parameters of the "simple" SMB model (like ELA)
  --init_strflowctrl    Initial value of the ice flow strength control (strflowctrl)
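
For instance, a century-long run on GPU saving outputs every 10 years could be launched as follows (the parameter values are illustrative only):

python -c "from igm import igm ; igm = igm() ; igm.run()" --tstart 2000 --tend 2100 --tsave 10 --usegpu True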

A simple usage with higher control

Alternatively, you may run python igm-run.py, where the file igm-run.py contains:

    from igm import igm 
    igm = igm() 
    igm.run()

or equivalently (this long version explicitly provides all steps of the glacier evolution model, Fig. 1):

    import tensorflow as tf
    from igm import igm
    igm = igm()

    igm.initialize()  
    with tf.device(igm.device_name):
        igm.load_ncdf_data(igm.config.geology_file)
        igm.initialize_fields()               
        igm.initialize_iceflow()
        igm.update_climate()
        igm.update_smb()
        igm.update_iceflow()                    
        igm.update_ncdf_ex()
        igm.update_ncdf_ts()
        igm.print_info()
        while igm.t < igm.config.tend:                       
            igm.update_climate()
            igm.update_smb()
            igm.update_iceflow()
            igm.update_t_dt() 
            igm.update_thk()       
            igm.update_ncdf_ex()
            igm.update_ncdf_ts()
            igm.update_plot()
            igm.print_info()

By doing so, one can easily access any igm variable (see the list of variables below) on the fly; e.g., one can plot the ice thickness with the following commands:

import matplotlib.pyplot as plt
plt.imshow(igm.thk,origin='lower') ; plt.colorbar()

You can also bring your own modifications into the loop; e.g., the following line imposes a zero mass balance above 4000 m asl.

igm.smb.assign( tf.where(igm.usurf > 4000, 0, igm.smb) )

TensorFlow

Note that IGM heavily relies on TensorFlow 2, and most of the relevant glaciological variables (e.g. ice thickness) are TensorFlow tensor objects, which can then only be modified using TensorFlow operations. At first sight, TensorFlow functions may look similar to NumPy ones; however, operations between TensorFlow tensors are in general not as flexible as in NumPy. For the best computational efficiency on GPU, it is crucial to keep all variables and operations within the TensorFlow framework without using NumPy (to avoid unnecessary transfers between GPU and CPU memory). For quick testing, and if you are unfamiliar with TensorFlow, you can always switch between TensorFlow and NumPy objects as follows:

import numpy as np
thk_np = igm.thk.numpy()  # TensorFlow to NumPy
# here you can do NumPy operations on thk_np if you wish
igm.thk.assign(thk_np)    # NumPy back to TensorFlow

Advanced usage

For more advanced usage with customized model components, you may add your own routines by building a new class igm that inherits from igm, keeping the core functions, as follows:

from igm import igm

class igm(igm):
    def update_my_field(self):
        self.myfield.assign( ... )

and then include 'igm.update_my_field()' in the time loop above (e.g. between 'igm.update_smb()' and 'igm.update_iceflow()'). In that case you can no longer use the shortcut 'igm.run()'. For a customized mass balance or climate update function, you may do this without modifying the main loop (i.e. keeping 'igm.run()') by defining your own routine named update_smb_mysmb and making sure to activate it by setting the parameter igm.config.type_mass_balance = 'mysmb'. For instance, an implementation of the mass balance function 'sinus' with an oscillating ELA looks like:

import tensorflow as tf
import math
from igm import igm

class igm(igm):

    def update_smb_sinus(self):
        ela = 2800 + 500 * math.sin((self.t / 50) * math.pi)  # define an oscillating ELA
        smb = self.usurf - ela
        smb *= tf.where(smb < 0, 0.005, 0.009)  # multiply by ablation and accumulation gradients
        smb = tf.clip_by_value(smb, -100, 2.0)  # clip the accumulation rate at 2 m/y
        self.smb.assign(smb)

igm = igm()
igm.config.type_mass_balance = 'sinus'  # do not forget to select the mass balance routine in use
igm.run()

Variable names

Whenever possible, IGM adopts the variable naming convention of PISM. Here is a minimal list of key variables:

Variable name   Shape       Description
x, y            (nx), (ny)  Coordinate vectors
thk             (ny,nx)     Ice thickness
topg            (ny,nx)     Basal topography (or bedrock)
usurf           (ny,nx)     Surface topography
smb             (ny,nx)     Surface mass balance
ubar            (ny,nx)     x-component of the depth-averaged ice velocity
vbar            (ny,nx)     y-component of the depth-averaged ice velocity
velbar_mag      (ny,nx)     Magnitude of (ubar, vbar)
uvelsurf        (ny,nx)     x-component of the surface ice velocity
vvelsurf        (ny,nx)     y-component of the surface ice velocity
velsurf_mag     (ny,nx)     Magnitude of (uvelsurf, vvelsurf)

Ice flow strength parametrization

To date, the ice flow instructor models I used have two critical parameters controlling the ice flow: the sliding coefficient c, and the rate factor A in Glen's law, which controls the ice viscosity (and depends on temperature). Always check the README in the emulator folder.

  • Some emulators have been trained with (A,c) evolving within a 2D parameter space, so that the emulator takes the two parameters as inputs. The two are called 'arrhenius' and 'slidingco', respectively, and therefore must be defined prior to any IGM simulation.

  • In other cases, I used several A with fixed c=0 (no sliding), and several c with fixed A=78 (a typical value for isothermal ice), and I reduced the parameters (A,c) to a single ice flow strength control (or 'strflowctrl') defined by A + c. Raising this new parameter permits describing regimes from non-sliding, low-shearing cold ice (low A, c=0) to fast, sliding-dominated temperate ice (A=78 and high c), where (A,c)=(78,0) represents a midway value corresponding to non-sliding, shearing temperate ice. A sketch of setting this parameter follows the list.
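
For emulators of the second kind, the single control can be set from the command line via --init_strflowctrl, or programmatically before the run (a sketch; the assumption that the option maps to igm.config.init_strflowctrl follows the igm.config.type_mass_balance pattern above):

    from igm import igm

    igm = igm()
    igm.config.init_strflowctrl = 78  # A + c; here the midway value (A,c)=(78,0)
    igm.run()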


Inputs / Outputs

The preferred and default format for I/O in IGM is NetCDF. We now briefly describe the input and output files in turn.

  • You must provide a file (default name: geology.nc) that contains the input data (i.e., initial ice surface and ice thickness) defined on a regular grid, which is kept in IGM for all computations (a sketch of assembling such a file follows the list). Note that any additional gridded variable (e.g., called myfield) passed in the input will automatically be converted to a TensorFlow tensor, and be accessible and modifiable as igm.myfield.

  • IGM records snapshot outputs at regular time intervals (frequency defined by --tsave), with a series of variables (which can be customized by changing --vars_to_save) in an output NetCDF file (default: ex.nc). IGM also records time-series variables such as glaciated area or ice volume.
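
For instance, a minimal geology.nc can be assembled with the netCDF4 package (a sketch only; the grid size, the 100 m spacing, and the flat bedrock are assumptions for illustration):

    import numpy as np
    from netCDF4 import Dataset

    nx, ny, dx = 100, 120, 100.0                   # assumed grid size and spacing (m)

    nc = Dataset('geology.nc', 'w')
    nc.createDimension('x', nx)
    nc.createDimension('y', ny)
    nc.createVariable('x', 'f4', ('x',))[:] = np.arange(nx) * dx   # coordinate vectors
    nc.createVariable('y', 'f4', ('y',))[:] = np.arange(ny) * dx
    topg = 2000.0 * np.ones((ny, nx), dtype='f4')  # assumed flat bedrock at 2000 m asl
    thk = np.zeros((ny, nx), dtype='f4')           # start ice-free
    nc.createVariable('topg', 'f4', ('y', 'x'))[:] = topg          # basal topography
    nc.createVariable('thk', 'f4', ('y', 'x'))[:] = thk            # ice thickness
    nc.createVariable('usurf', 'f4', ('y', 'x'))[:] = topg + thk   # surface topography
    nc.close()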

Note that the NCO toolkit permits easy operations on NetCDF files from the command line, e.g.

   ncks -x -v thk file.nc file.nc              # remove the variable 'thk' from file.nc
   ncks -v usurf file.nc file.nc               # extract the variable 'usurf' from file.nc
   ncap2 -h -O -s 'thk=0*thk' file.nc file.nc  # apply operations to file.nc, here force zero thk
   ncrename -v apc,strflowctrl file.nc         # rename the variable 'apc' to 'strflowctrl' in file.nc

Available ice flow emulators

You may find trained and ready-to-use ice flow emulators in the folder model-lib/T_M_I_Y_V/R/, where 'T_M_I_Y_V' defines the emulator and R defines the spatial resolution. Make sure that the resolution you need is available for the picked emulator. Results produced with IGM will strongly depend on the chosen emulator. Make sure that you use the emulator within the hull of its training dataset (e.g., do not model an ice sheet with an emulator trained on mountain glaciers) to ensure reliability (or fidelity w.r.t. the instructor model) -- the emulator is probably much better at interpolating than at extrapolating. Information on the training dataset is provided in a dedicated README that comes along with the emulator.

For now, only the emulator trained by CfsFlow is available at different resolutions. If you are unhappy with the proposed list of emulators, consider training your own with the Deep Learning Emulator.

Emulating beyond the ice flow

The structure of IGM facilitates the embedding of further emulators beyond the ice flow model (e.g., mass balance, ...), assuming that they map 2D gridded fields to 2D gridded fields similarly to the ice flow one. To do so, it suffices to follow the example of the ice flow emulator: define your input and output fields (taking care to respect the variable naming convention), collect the data you want to train your emulator on, train it using the Deep Learning Emulator, embed your emulator in IGM following the example of the update_iceflow method, and make sure that any extra inserted variables are well defined and/or updated in the loop. Feel free to contact me if you are planning to do so.
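
A minimal sketch of such an embedding is given below; everything in it is an assumption for illustration (the model path, the choice of input fields, and their stacking along the channel axis), and the routine name follows the update_smb_mysmb convention introduced above rather than an actual IGM API:

    import tensorflow as tf
    from igm import igm

    class igm(igm):

        def update_smb_myemulator(self):
            # load the hypothetical Keras emulator once, on the first call
            if not hasattr(self, 'smb_emulator'):
                self.smb_emulator = tf.keras.models.load_model('path/to/my-smb-emulator')
            # stack the 2D input fields into a (1, ny, nx, nchannels) tensor
            inputs = tf.stack([self.usurf, self.thk], axis=-1)[tf.newaxis, ...]
            outputs = self.smb_emulator(inputs)   # forward pass of the emulator
            self.smb.assign(outputs[0, :, :, 0])  # write the (ny, nx) field back

    igm = igm()
    igm.config.type_mass_balance = 'myemulator'  # activate the emulated mass balance
    igm.run()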

Data assimilation / Invert modelling

A data assimilation module of IGM, which seeks the optimal ice thickness, top ice surface, and ice flow parametrization that best explain observational data while remaining consistent with the ice flow emulator, is currently in preparation.

CPUs, GPUs, and IGM capabilities

In practice, GPUs outperform CPUs most of the time when running IGM, and I therefore advise activating your GPU if you have one. The expected speed-up of GPUs over CPUs mostly depends on the size of the computational domain. IGM works fine on CPU for small computational domains (typically individual glaciers). In contrast, GPUs will be very advantageous for treating very large computational grids (typically large networks of glaciers), as IGM naturally takes further benefit from parallelism.

Reference

@article{IGM,
  author  = "Jouvet, G. and Cordonnier, G. and Kim, B. and Luethi, M. and Vieli, A. and Aschwanden, A.",
  title   = "Deep learning speeds up ice flow modelling by several orders of magnitude",
  journal = "Journal of Glaciology",
  year    = 2021,
}

Acknowledgements

I greatly thank Guillaume Cordonnier for his valuable help with the TensorFlow implementation. The Parallel Ice Sheet Model (PISM) has greatly inspired the naming of variables, as well as the format of the input and output NetCDF files.

IGM-related PhD and Master project offers

I am currently looking for a PhD student to work with IGM on reconstructing paleoclimate using inverse glacier modelling and deep learning.

Contact

Feel free to drop me an email for any questions, bug reports, or ideas of model extension: guillaume.jouvet at geo.uzh.ch
