Deep-learning X-Ray Micro-CT image enhancement, pore-network modelling and continuum modelling

Overview


A GitHub repository for deep-learning image enhancement, pore-network modelling and continuum modelling from X-Ray Micro-CT images. The repository contains all code necessary to recreate the results in the paper [1]. The images used in various parts of the code are available on Zenodo at DOI: 10.5281/zenodo.5542624. Previous experimental and modelling work can be found in [2,3].

Workflow figure: summary of the workflow, flowing from left to right. First, the EDSR network is trained and tested on paired LR and HR data to produce SR data which emulates the HR data. Second, the trained EDSR is applied to the whole-core LR data to generate a whole-core SR image. A pore-network model (PNM) is then used to generate 3D continuum properties at REV scale from the post-processed image. Finally, the 3D digital model is validated through continuum modelling (CM) of the multiphase flow experiments.

The workflow image above summarises the general approach. We list the detailed steps in the workflow below, linking to specific files and folders where necessary.

1. Generating LR, Cubic and HR data

The low resolution (LR) and high resolution (HR) images can be downloaded from Zenodo at DOI: 10.5281/zenodo.5542624. The following code can then be run:

  • A0_0_0_Generate_LR_bicubic.m. This code generates cubic interpolation images from LR images, artificially decreasing the pixel size and interpolating, for later comparison against the HR and SR images (see the sketch after this list).
  • A0_0_1_Generate_filtered_images_LR_HR.m. This code performs non-local means filtering of the LR, cubic and HR images, using the settings in the paper [1].
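
As a rough illustration of the interpolation step, the Python sketch below upscales an LR volume by x3 with cubic interpolation. This is illustrative only: the repository performs this step in MATLAB, and the file names are assumptions.

```python
# Illustrative cubic x3 upscaling of an LR volume (the repository uses
# MATLAB for this step); file names and the x3 factor are assumptions.
import tifffile
from scipy.ndimage import zoom

lr = tifffile.imread('Core1_Subvol1_LR.tif')          # 3D LR volume
cubic = zoom(lr.astype('float32'), zoom=3, order=3)   # order=3 -> cubic spline
tifffile.imwrite('Core1_Subvol1_Cubic.tif', cubic)
```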

2. EDSR network training

The 3D EDSR (Enhanced Deep Super Resolution) convolutional neural network used in this work is based on the implementation from the CVPR 2017 workshop paper "Enhanced Deep Residual Networks for Single Image Super-Resolution" (https://arxiv.org/pdf/1707.02921.pdf), using PyTorch.

The folder 3D_EDSR contains the EDSR network training & testing code. The code is written in Python, and tested in the following environment:

  • Windows 10
  • Python 3.7.4
  • PyTorch 1.8.1
  • CUDA 11.2
  • cuDNN 8.1.0

The Jupyter notebook Train_review.ipynb contains cells with the individual .py codes copied in to make one continuous workflow that can be run for EDSR training and validation. In this file, and those listed below, the LR and HR data used for training should be stored in the top level of 3D_EDSR, respectively, as:

  • Core1_Subvol1_LR.tif
  • Core1_Subvol1_HR.tif

To generate suitable training images (sub-slices of the full data above), the following code can be run:

  • train_image_generator.py. This generates LR and registered x3 HR sub-images for EDSR training. Sub-image sizes are flexible, depending on the pore structure. The LR/HR sub-images are saved into two separate folders, LR and HR (a sketch of this kind of patch extraction follows below).
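
The sketch below shows what such paired patch extraction can look like. The patch size, patch count and output naming are assumptions; the real train_image_generator.py may differ.

```python
# Hedged sketch of paired LR/HR training-patch extraction; patch size,
# patch count and file naming are assumptions.
import os
import numpy as np
import tifffile

lr = tifffile.imread('Core1_Subvol1_LR.tif')
hr = tifffile.imread('Core1_Subvol1_HR.tif')     # registered, x3 the LR grid
os.makedirs('LR', exist_ok=True)
os.makedirs('HR', exist_ok=True)

patch = 32                                       # LR patch edge length (assumed)
rng = np.random.default_rng(0)
for i in range(500):                             # number of patches (assumed)
    z, y, x = (int(rng.integers(0, d - patch)) for d in lr.shape)
    tifffile.imwrite(f'LR/{i:04d}.tif', lr[z:z+patch, y:y+patch, x:x+patch])
    tifffile.imwrite(f'HR/{i:04d}.tif',
                     hr[3*z:3*(z+patch), 3*y:3*(y+patch), 3*x:3*(x+patch)])
```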

The EDSR model can then be trained on the LR and HR sub-sampled data via:

  • main_edsr.py. This trains the EDSR network on the LR/HR data. It requires load_data.py, the sub-image loader for EDSR training, and the 3D EDSR model structure code edsr_x3_3d.py. The trained network is saved as 3D_EDSR.pt; the version supplied here is the one trained and used in the paper [1] (an illustrative model sketch follows below).
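
For orientation, the sketch below shows the general shape of an EDSR-style 3D x3 network in PyTorch. It is not the repository's edsr_x3_3d.py: the channel width, block count and the trilinear upsampling (standing in for a 3D sub-pixel shuffle) are all assumptions.

```python
# Hedged sketch of an EDSR-style 3D x3 network; NOT the repository's
# edsr_x3_3d.py (widths, depths and the upsampling method are assumptions).
import torch
import torch.nn as nn

class ResBlock3d(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)                # EDSR residual block (no batch norm)

class EDSR3d(nn.Module):
    def __init__(self, ch=64, n_blocks=8, scale=3):
        super().__init__()
        self.head = nn.Conv3d(1, ch, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock3d(ch) for _ in range(n_blocks)])
        self.up = nn.Upsample(scale_factor=scale, mode='trilinear',
                              align_corners=False)
        self.tail = nn.Conv3d(ch, 1, 3, padding=1)

    def forward(self, x):                      # x: (batch, 1, D, H, W) LR volume
        h = self.head(x)
        h = h + self.body(h)                   # long skip connection
        return self.tail(self.up(h))           # (batch, 1, 3D, 3H, 3W) SR output

model = EDSR3d()
out = model(torch.randn(1, 1, 16, 16, 16))     # -> torch.Size([1, 1, 48, 48, 48])
```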

To view the training loss performance, the loss data can be output and saved to .txt files for plotting.
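
A simple way to plot such a saved loss file is sketched below; the file name and one-value-per-line format are assumptions.

```python
# Hypothetical loss-curve plot; 'train_loss.txt' (one loss value per line)
# is an assumed file name and format.
import matplotlib.pyplot as plt
import numpy as np

loss = np.loadtxt('train_loss.txt')
plt.plot(loss)
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
```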

3. EDSR network verification

The trained EDSR network 3D_EDSR.pt can be verified by generating SR images from an LR image different from the one used in training. Here we use the second subvolume from core 1, found on Zenodo at DOI: 10.5281/zenodo.5542624:

  • Core1_Subvol2_LR.tif

The trained EDSR model can then be run on the LR data using:

  • validation_image_generator.py. This creates the input validation LR images. The validation LR images are large in the x and y axes and small in the z axis to reduce computational cost.
  • main_edsr_validation.py. The validation LR images are used with the trained EDSR model to generate 3D SR sub-images. These can be saved in the folder SR_subdata, as the Jupyter notebook Train_review.ipynb does. The SR sub-images are then stacked to form a whole 3D SR image (a sketch of this step follows below).
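
The sketch below illustrates slab-by-slab SR inference and stacking under assumed slab thickness and file names; main_edsr_validation.py is the authoritative version.

```python
# Hedged sketch of slab-by-slab SR inference and stacking; slab thickness
# and file names are assumptions, and 3D_EDSR.pt is assumed to store the
# full serialized model (see main_edsr_validation.py for the real procedure).
import numpy as np
import tifffile
import torch

model = torch.load('3D_EDSR.pt', map_location='cpu')
model.eval()

lr = tifffile.imread('Core1_Subvol2_LR.tif').astype(np.float32)
step = 8                                           # z-slab thickness (assumed)
slabs = []
with torch.no_grad():
    for z0 in range(0, lr.shape[0], step):
        slab = torch.from_numpy(lr[z0:z0 + step])[None, None]  # (1,1,dz,H,W)
        slabs.append(model(slab)[0, 0].numpy())
sr = np.concatenate(slabs, axis=0)                 # stacked whole 3D SR image
tifffile.imwrite('Core1_Subvol2_SR.tif', sr)
```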

Following the generation of suitable verification images, various metrics can be calculated from the images to judge performance against the true HR data.
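
For instance, voxel-wise quality metrics such as PSNR and SSIM can be computed with scikit-image. This is illustrative: the paper's exact metric set is not reproduced here, and the file names are assumptions.

```python
# Illustrative SR-vs-HR quality metrics; file names are assumptions and
# the paper [1] defines the full metric set used for verification.
import tifffile
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = tifffile.imread('Core1_Subvol2_HR.tif')
sr = tifffile.imread('Core1_Subvol2_SR.tif')
rng = float(hr.max()) - float(hr.min())
print('PSNR:', peak_signal_noise_ratio(hr, sr, data_range=rng))
print('SSIM:', structural_similarity(hr, sr, data_range=rng))
```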

Following the generation of these metrics, several plotting codes can be run to compare the LR, Cubic, HR and SR results.

4. Continuum modelling and validation

After the EDSR images have been verified using the image metrics and pore-network model simulations, the EDSR network can be used to generate continuum-scale models for validation against experimental results. We compare the continuum-model simulations to the accompanying experimental dataset in [2]. First, the codes from the verification section are run on each subvolume of the whole-core images.

The subvolume and whole-core images can be found on the Digital Rocks Portal and the BGS National Geoscience Data Centre, respectively. This results in SR images (alongside the pre-existing LR images) of each subvolume in both cores 1 and 2. After this, pore-network modelling can be performed on the images.
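
As a schematic example of moving from a segmented image to block-scale continuum properties, a segmented SR volume can be coarse-grained into per-block porosity. The file name and block size are assumptions, and the repository's pore-network codes compute far richer properties (e.g. permeability) than this.

```python
# Illustrative coarse-graining of a segmented volume (0 = grain, 1 = pore)
# into block-scale porosity; file name and block size are assumptions.
import numpy as np
import tifffile

seg = tifffile.imread('Core1_Subvol1_SR_segmented.tif')
b = 100                                          # block edge length in voxels
nz, ny, nx = (d // b for d in seg.shape)
blocks = seg[:nz*b, :ny*b, :nx*b].reshape(nz, b, ny, b, nx, b)
porosity = blocks.mean(axis=(1, 3, 5))           # one porosity value per block
np.save('porosity_blocks.npy', porosity)
```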

The whole-core results can then be compiled into a single dataset .mat file.

The petrophysical properties for the whole core can then be visualised.
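
A quick matplotlib look at such a block-scale field is sketched below; it assumes the porosity_blocks.npy file from the coarse-graining sketch above, not an actual repository output.

```python
# Hypothetical visualisation of a mid-core porosity slice; assumes the
# porosity_blocks.npy produced by the coarse-graining sketch above.
import matplotlib.pyplot as plt
import numpy as np

phi = np.load('porosity_blocks.npy')
plt.imshow(phi[phi.shape[0] // 2], cmap='viridis')
plt.colorbar(label='Porosity (-)')
plt.title('Mid-core block porosity')
plt.show()
```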

Continuum models can then be generated using the 3D petrophysical properties. We generate continuum properties for the multiphase flow simulator CMG IMEX. The simulator reads .dat files, which use .inc files of the 3D petrophysical properties, to perform continuum-scale immiscible drainage multiphase flow simulations at fixed fractional flows of decane and brine. The simulations run until steady state, and the results can be compared to the experiments on a 1:1 basis. The accompanying codes generate and run the files in CMG IMEX (which has to be installed separately).
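
To give a flavour of the .inc format, the sketch below writes a porosity field as a CMG IMEX-style include file. The keyword syntax is an assumption; consult the CMG manual and the generated files in CMG_IMEX_files for the authoritative format.

```python
# Hedged sketch of writing a porosity field to a CMG IMEX-style .inc file;
# the *POR *ALL keyword syntax is an assumption (check CMG_IMEX_files and
# the CMG manual for the exact format).
import numpy as np

phi = np.load('porosity_blocks.npy')   # block porosity from the sketch above

with open('porosity.inc', 'w') as f:
    f.write('*POR *ALL\n')
    for v in phi.ravel():              # last axis (x/I) varies fastest here
        f.write(f'{v:.4f}\n')
```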

Example CMG IMEX simulation files, generated from these codes, are given for core 1 in the folder CMG_IMEX_files.

The continuum simulation outputs can be compared to the experimental results, namely 3D saturations and pressures, in the form of absolute and relative permeability. The whole-core results from our simulations are summarised, along with the experimental results, in the file Whole_core_results_exp_sim.xlsx. The following code can be run:

  • A1_1_2_Plot_IMEX_continuum_results.m. This plots graphs of the continuum-model results above, in terms of 3D saturations and pressures, compared against the experimental results. The experimental data is stored in Exp_data.

5. Extra Folders

  • Functions. This contains functions used in some of the .m files above.
  • media. This folder contains the workflow image.

6. References

  1. Jackson, S.J., Niu, Y., Manoorkar, S., Mostaghimi, P. and Armstrong, R.T. 2021. Deep learning of multi-resolution X-Ray micro-CT images for multi-scale modelling.
  2. Jackson, S.J., Lin, Q. and Krevor, S. 2020. Representative Elementary Volumes, Hysteresis, and Heterogeneity in Multiphase Flow from the Pore to Continuum Scale. Water Resources Research, 56(6), e2019WR026396.
  3. Zahasky, C., Jackson, S.J., Lin, Q. and Krevor, S. 2020. Pore network model predictions of Darcy-scale multiphase flow heterogeneity validated by experiments. Water Resources Research, 56(6), e2019WR026708.