Deep-learning X-Ray Micro-CT image enhancement, pore-network modelling and continuum modelling

Overview


A GitHub repository for deep-learning image enhancement, pore-network and continuum modelling from X-Ray Micro-CT images. The repository contains all the code necessary to recreate the results in the paper [1]. The images used in the various parts of the code are available on Zenodo at DOI: 10.5281/zenodo.5542624. Previous experimental and modelling work is described in [2,3].

Workflow image: summary of the workflow, flowing from left to right. First, the EDSR network is trained and tested on paired LR and HR data to produce SR data which emulates the HR data. Second, the trained EDSR is applied to the whole-core LR data to generate a whole-core SR image. A pore-network model (PNM) is then used to generate 3D continuum properties at the REV scale from the post-processed image. Finally, the 3D digital model is validated through continuum modelling (CM) of the multiphase flow experiments.

The workflow image above summarises the general approach. We list the detailed steps in the workflow below, linking to specific files and folders where necessary.

1. Generating LR, Cubic and HR data

The low resolution (LR) and high resolution (HR) images can be downloaded from Zenodo at DOI: 10.5281/zenodo.5542624. The following code can then be run:

  • A0_0_0_Generate_LR_bicubic.m. This code generates cubic interpolation images from the LR images, artificially decreasing the pixel size and interpolating, for later comparison with the HR and SR images (an illustrative sketch is given after this list).
  • A0_0_1_Generate_filtered_images_LR_HR.m. This code performs non-local means filtering of the LR, cubic and HR images, using the settings given in the paper [1].
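These steps are implemented in MATLAB in the repository; purely as an illustration of the same operations, here is a minimal Python sketch (assuming tifffile, SciPy and scikit-image are available; the output filename and filter parameters are placeholders, not the settings from [1]):

```python
# Minimal sketch (not the repository's MATLAB code): cubic interpolation of an
# LR volume followed by non-local means filtering. Filenames and filter
# parameters are illustrative.
import numpy as np
import tifffile
from scipy.ndimage import zoom
from skimage.restoration import denoise_nl_means, estimate_sigma

lr = tifffile.imread("Core1_Subvol1_LR.tif").astype(np.float32)

# Cubic interpolation: artificially decrease the pixel size by 3x in each axis
cubic = zoom(lr, 3, order=3)

# Non-local means filtering (placeholder parameters, not those from [1])
sigma = float(np.mean(estimate_sigma(cubic)))
filtered = denoise_nl_means(cubic, h=0.8 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6)

tifffile.imwrite("Core1_Subvol1_Cubic_filtered.tif", filtered.astype(np.float32))
```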

2. EDSR network training

The 3D EDSR (Enhanced Deep Super-Resolution) convolutional neural network used in this work is based on the PyTorch implementation of the CVPR 2017 workshop paper "Enhanced Deep Residual Networks for Single Image Super-Resolution" (https://arxiv.org/pdf/1707.02921.pdf).
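The network itself is defined in edsr_x3_3d.py (see below); as a rough indication only, an EDSR-style 3D network with ×3 upsampling might look like the following PyTorch sketch. The residual blocks without batch normalisation follow the original EDSR paper, while the layer counts and trilinear upsampling here are assumptions, not the repository's exact architecture:

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """EDSR-style residual block: conv-ReLU-conv with no batch norm."""
    def __init__(self, n_feats):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(n_feats, n_feats, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(n_feats, n_feats, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class EDSR3D(nn.Module):
    """Indicative 3D EDSR with x3 upsampling (a sketch, not edsr_x3_3d.py)."""
    def __init__(self, n_feats=64, n_blocks=8, scale=3):
        super().__init__()
        self.head = nn.Conv3d(1, n_feats, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock3D(n_feats) for _ in range(n_blocks)],
                                  nn.Conv3d(n_feats, n_feats, 3, padding=1))
        self.tail = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="trilinear", align_corners=False),
            nn.Conv3d(n_feats, 1, 3, padding=1),
        )

    def forward(self, x):
        x = self.head(x)
        x = x + self.body(x)
        return self.tail(x)

# e.g. a 32^3 LR patch becomes a 96^3 SR patch
sr = EDSR3D()(torch.randn(1, 1, 32, 32, 32))
print(sr.shape)  # torch.Size([1, 1, 96, 96, 96])
```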

The folder 3D_EDSR contains the EDSR network training & testing code. The code is written in Python, and tested in the following environment:

  • Windows 10
  • Python 3.7.4
  • PyTorch 1.8.1
  • CUDA 11.2
  • cuDNN 8.1.0

The Jupyter notebook Train_review.ipynb contains cells with the individual .py codes combined into one continuous workflow that can be run for EDSR training and validation. In this file, and in those listed below, the LR and HR data used for training should be stored in the top level of 3D_EDSR as (a quick check of these paired files is sketched after the list below):

  • Core1_Subvol1_LR.tif
  • Core1_Subvol1_HR.tif
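As a quick sanity check of the paired files (a sketch assuming the tifffile package), each HR axis should be roughly three times the corresponding LR axis:

```python
import tifffile

lr = tifffile.imread("Core1_Subvol1_LR.tif")
hr = tifffile.imread("Core1_Subvol1_HR.tif")

# The HR volume is the registered x3 counterpart of the LR volume,
# so each HR axis should be ~3x the corresponding LR axis.
print("LR shape:", lr.shape)
print("HR shape:", hr.shape)
```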

To generate suitable training images (sub-slices of the full data above), the following code can be run:

  • train_image_generator.py. This generates LR and registered ×3 HR sub-images for EDSR training; the sub-image size is flexible, depending on the pore structure. The LR/HR sub-images are saved to two separate folders, LR and HR (a schematic sketch of this pairing is given below).
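The extraction itself is done by train_image_generator.py; the following is only a schematic sketch of the underlying idea, random registered LR/HR crops, with a hypothetical patch size:

```python
# Schematic sketch of paired sub-image extraction; patch size and count are
# hypothetical, not the values used by train_image_generator.py.
import os
import numpy as np
import tifffile

SCALE = 3        # HR is the registered x3 counterpart of LR
LR_PATCH = 32    # hypothetical LR sub-image size; the real size depends on the pore structure

lr = tifffile.imread("Core1_Subvol1_LR.tif")
hr = tifffile.imread("Core1_Subvol1_HR.tif")
rng = np.random.default_rng(0)

def random_pair():
    """Crop a registered LR/HR patch pair from the same physical location."""
    z, y, x = (int(rng.integers(0, s - LR_PATCH)) for s in lr.shape)
    lr_patch = lr[z:z + LR_PATCH, y:y + LR_PATCH, x:x + LR_PATCH]
    hr_patch = hr[SCALE * z:SCALE * (z + LR_PATCH),
                  SCALE * y:SCALE * (y + LR_PATCH),
                  SCALE * x:SCALE * (x + LR_PATCH)]
    return lr_patch, hr_patch

# Write a handful of pairs into the LR/ and HR/ folders used for training
os.makedirs("LR", exist_ok=True)
os.makedirs("HR", exist_ok=True)
for i in range(10):
    lr_patch, hr_patch = random_pair()
    tifffile.imwrite(f"LR/sub_{i:04d}.tif", lr_patch)
    tifffile.imwrite(f"HR/sub_{i:04d}.tif", hr_patch)
```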

The EDSR model can then be trained on the LR and HR sub-sampled data via:

  • main_edsr.py. This trains the EDSR network on the LR/HR data. It requires load_data.py, the sub-image loader for EDSR training, and the 3D EDSR model definition edsr_x3_3d.py. The trained network is saved as 3D_EDSR.pt; the version supplied here is the one trained and used in the paper [1].
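The full training procedure lives in main_edsr.py; the loop below is a bare-bones sketch only. The synthetic dataset is a stand-in for load_data.py, and the L1 loss and Adam optimiser are taken from the original EDSR paper rather than from the repository code:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Bare-bones training sketch: the EDSR3D sketch above and a tiny synthetic
# dataset stand in for edsr_x3_3d.py and load_data.py so the loop runs end to end.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = EDSR3D().to(device)

lr_patches = torch.randn(8, 1, 16, 16, 16)   # stand-in LR sub-images
hr_patches = torch.randn(8, 1, 48, 48, 48)   # stand-in registered x3 HR sub-images
loader = DataLoader(TensorDataset(lr_patches, hr_patches), batch_size=2, shuffle=True)

criterion = nn.L1Loss()                       # L1 loss, as in the EDSR paper
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

losses = []
for epoch in range(2):                        # a real run uses many more epochs
    for lr_b, hr_b in loader:
        lr_b, hr_b = lr_b.to(device), hr_b.to(device)
        loss = criterion(model(lr_b), hr_b)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        losses.append(loss.item())

torch.save(model.state_dict(), "3D_EDSR.pt")
with open("train_loss.txt", "w") as f:        # loss history for later plotting
    f.writelines(f"{v}\n" for v in losses)
```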

To view the training loss performance, the loss data can be output and saved to .txt files and then plotted, for example as sketched below.
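A minimal plotting sketch, assuming the loss values were written one per line as in the training sketch above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot the training loss history saved during training
losses = np.loadtxt("train_loss.txt")
plt.plot(losses)
plt.xlabel("Training iteration")
plt.ylabel("L1 loss")
plt.title("EDSR training loss")
plt.savefig("train_loss.png", dpi=200)
```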

3. EDSR network verification

The trained EDSR network 3D_EDSR.pt can be verified by generating SR images from a different LR image than the one used in training. Here we use the second subvolume from core 1, found on Zenodo at DOI: 10.5281/zenodo.5542624:

  • Core1_Subvol2_LR.tif

The trained EDSR model can then be run on the LR data using:

  • validation_image_generator.py. This creates the input validation LR images. The validation LR images are large in the x and y axes and small in the z axis to reduce computational cost.
  • main_edsr_validation.py. The validation LR images are fed to the trained EDSR model to generate 3D SR sub-images. These can be saved in the folder SR_subdata, as the Jupyter notebook Train_review.ipynb does. The SR sub-images are then stacked to form the whole 3D SR image (a sketch of this chunk-and-stack pattern follows this list).
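In the repository this is handled by main_edsr_validation.py; the sketch below only illustrates the general chunk-and-stack pattern, using the EDSR3D sketch and the checkpoint saved by the training sketch above (the repository's own 3D_EDSR.pt matches edsr_x3_3d.py, not this sketch):

```python
import numpy as np
import tifffile
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = EDSR3D().to(device)                    # sketch architecture from above
model.load_state_dict(torch.load("3D_EDSR.pt", map_location=device))
model.eval()

lr = tifffile.imread("Core1_Subvol2_LR.tif").astype(np.float32)

CHUNK = 8                                      # thin z-chunks keep memory use low
sr_chunks = []
with torch.no_grad():
    for z0 in range(0, lr.shape[0], CHUNK):
        chunk = lr[z0:z0 + CHUNK]              # full x,y extent, small z extent
        inp = torch.from_numpy(chunk)[None, None].to(device)
        sr_chunks.append(model(inp).squeeze().cpu().numpy())

# Stack the SR sub-images along z to form the whole 3D SR image
sr = np.concatenate(sr_chunks, axis=0)
tifffile.imwrite("Core1_Subvol2_SR.tif", sr.astype(np.float32))
```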

Following the generation of suitable verification images, various metrics can be calculated from the images to judge performance against the true HR data:

Following the generation of these metrics, several plotting codes can be run to compare LR, Cubic, HR and SR results:

4. Continuum modelling and validation

After the EDSR images have been verified using the image metrics and pore-network model simulations, the EDSR network can be used to generate continuum-scale models for validation against experimental results. We compare the continuum-model simulations to the accompanying experimental dataset in [2]. First, the following codes are run on each subvolume of the whole-core images, as per the verification section:

The subvolume (and whole-core) images can be found on the Digital Rocks Portal and on the BGS National Geoscience Data Centre, respectively. This results in SR images (alongside the pre-existing LR images) of each subvolume in both cores 1 and 2. After this, pore-network modelling can be performed using:

The whole core results can then be compiled into a single dataset .mat file using:

To visualise the petrophysical properties for the whole core, the following code can be run:

Continuum models can then be generated using the 3D petrophysical properties. We generate continuum properties for the multiphase flow simulator CMG IMEX. The simulator reads .dat files which use .inc files of the 3D petrophysical properties to perform continuum-scale immiscible drainage multiphase flow simulations at fixed fractional flows of decane and brine. The simulations run until steady state, and the results can be compared to the experiments on a 1:1 basis. The following codes generate and run the files in CMG IMEX (which has to be installed separately):

Example CMG IMEX simulation files, which are generated from these codes, are given for core 1 in the folder CMG_IMEX_files.
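Purely to illustrate the include-file idea, the sketch below writes a 3D porosity array to a simple keyword-style .inc file; the keyword names and value ordering are assumptions for illustration, not copied from the repository's CMG templates in CMG_IMEX_files:

```python
import numpy as np

# Hypothetical 3D porosity field at the continuum (REV) grid resolution;
# in the workflow this comes from the pore-network modelling step.
porosity = np.random.uniform(0.15, 0.25, size=(10, 10, 40))   # (i, j, k)

# Write a simple keyword-style include file. The *POR *ALL layout and the
# I-fastest ordering below are assumptions for illustration only.
with open("porosity.inc", "w") as f:
    f.write("*POR *ALL\n")
    for value in porosity.flatten(order="F"):   # i fastest, then j, then k
        f.write(f"{value:.4f}\n")
```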

The continuum simulation outputs can be compared to the experimental results, namely the 3D saturations and the pressures in the form of absolute and relative permeabilities. The whole-core results from our simulations are summarised, along with the experimental results, in the file Whole_core_results_exp_sim.xlsx. The following code can be run:

  • A1_1_2_Plot_IMEX_continuum_results.m. This plots graphs of the continuum model results from above, in terms of 3D saturations and pressures, compared to the experimental results. The experimental data is stored in Exp_data.
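As an example of how the summary spreadsheet could also be inspected outside MATLAB, here is a hedged pandas/matplotlib sketch; the column names are hypothetical placeholders, so check Whole_core_results_exp_sim.xlsx for the actual layout:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Column names ("Fractional flow", "krw_exp", "krw_sim") are hypothetical
# placeholders; inspect the spreadsheet for the actual headers.
df = pd.read_excel("Whole_core_results_exp_sim.xlsx")

plt.plot(df["Fractional flow"], df["krw_exp"], "o", label="Experiment")
plt.plot(df["Fractional flow"], df["krw_sim"], "-", label="Simulation")
plt.xlabel("Fractional flow of brine")
plt.ylabel("Relative permeability")
plt.legend()
plt.savefig("rel_perm_comparison.png", dpi=200)
```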

5. Extra Folders

  • Functions. This contains functions used in some of the .m files above.
  • media. This folder contains the workflow image.

6. References

  1. Jackson, S.J., Niu, Y., Manoorkar, S., Mostaghimi, P. and Armstrong, R.T. 2021. Deep learning of multi-resolution X-Ray micro-CT images for multi-scale modelling.
  2. Jackson, S.J., Lin, Q. and Krevor, S. 2020. Representative Elementary Volumes, Hysteresis, and Heterogeneity in Multiphase Flow from the Pore to Continuum Scale. Water Resources Research, 56(6), e2019WR026396
  3. Zahasky, C., Jackson, S.J., Lin, Q. and Krevor, S. 2020. Pore network model predictions of Darcy-scale multiphase flow heterogeneity validated by experiments. Water Resources Research, 56(6), e2019WR026708.