SynthSR

A framework for joint super-resolution and image synthesis, without requiring real training data

This repository contains code to train a Convolutional Neural Network (CNN) for Super-resolution (SR), or joint SR and data synthesis. The method can also be configured to achieve denoising and bias field correction.

The network takes synthetic scans generated on the fly as inputs, and can be trained to regress either real or synthetic target scans. The synthetic scans are obtained by sampling a generative model built on the SynthSeg [1] package, which we really encourage you to have a look at!


In short, synthetic scans are generated at each mini-batch by: 1) randomly selecting a label map among a pool of training segmentations, 2) spatially deforming it in 3D, 3) sampling a Gaussian Mixture Model (GMM) conditioned on the deformed label map (see Figure 1 below), and 4) corrupting it with a random bias field. This gives us a synthetic scan at high resolution (HR). We then simulate thick slice spacing by blurring and downsampling it to low resolution (LR). In SR, we then train a network to learn the mapping between LR data (possibly multimodal, hence the joint synthesis) and HR synthetic scans. Moreover, if real images are available along with the training label maps, we can learn to regress the real images instead.
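To make these steps concrete, below is a minimal, self-contained sketch of the generation loop in plain NumPy/SciPy. It is only an illustration: the intensity ranges, bias field strength and blurring parameters are made up, and the actual generator (labels_to_image_model.py) is a Keras model with many more options (3D affine/elastic deformation, multimodal channels, per-label prior distributions, etc.).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def synthesize_pair(label_maps, slice_spacing=3, rng=None):
    """Toy illustration of the SynthSR generation loop.

    label_maps    : list of integer arrays (pool of training segmentations).
    slice_spacing : factor used to simulate thick slices along the last axis.
    Returns (lr_image, hr_image): a possible network input and target.
    """
    rng = rng or np.random.default_rng()

    # 1) randomly select a label map from the pool
    labels = label_maps[rng.integers(len(label_maps))]

    # 2) spatial deformation omitted here; the real model applies random
    #    affine + elastic deformations to the label map in 3D

    # 3) sample a GMM conditioned on the labels: one random mean/std per label
    hr = np.zeros(labels.shape, dtype=np.float32)
    for lab in np.unique(labels):
        mask = labels == lab
        hr[mask] = rng.normal(rng.uniform(0, 255), rng.uniform(1, 25), mask.sum())

    # 4) corrupt with a smooth multiplicative bias field
    bias = gaussian_filter(rng.normal(0, 0.3, labels.shape), sigma=20)
    hr_biased = hr * np.exp(bias)

    # simulate thick slices: blur along the last axis (FWHM ~ slice spacing),
    # downsample, then resample back to the HR grid
    sigma = [0] * (labels.ndim - 1) + [slice_spacing / 2.355]
    lr = gaussian_filter(hr_biased, sigma=sigma)
    lr = zoom(lr, [1] * (labels.ndim - 1) + [1 / slice_spacing], order=1)
    lr = zoom(lr, np.array(hr.shape) / np.array(lr.shape), order=1)
    return lr, hr
```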


Figure 1: overview of SynthSR


Tutorials for Generation and Training

This repository contains code to train your own network for SR or joint SR and synthesis. Because the training function has a lot of options, we provide several tutorials here to help you familiarise yourself with the different training/generation parameters. We emphasise that we provide example training data along with these scripts: 5 preprocessed publicly available T1 scans at 1mm isotropic resolution [2] with corresponding label maps obtained with FreeSurfer [3]. The tutorials can be found in the scripts folder, and they include:

  • Six generation scripts corresponding to different use cases (see Figure 2 below). We recommend going through all of them (even if you're only interested in case 1), since we successively introduce different functionalities along the way.

  • One training script, explaining the main training parameters.

  • One script explaining how to estimate the parameters governing the GMM, in case you wish to train a model on your own data.
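As a rough illustration of what this last tutorial does (and not the repository's actual API), the snippet below gathers per-label intensity means and standard deviations across a few images and their label maps; statistics of this kind can then be used as priors for the GMM means and standard deviations.

```python
import numpy as np
import nibabel as nib

def estimate_gmm_priors(image_paths, label_paths):
    """Collect per-label intensity statistics across training subjects.

    Returns two dicts mapping label -> (mean, std) of the per-subject means,
    and label -> (mean, std) of the per-subject standard deviations, i.e.
    hyper-parameters one could use as priors for the GMM parameters.
    """
    means, stds = {}, {}
    for im_path, lab_path in zip(image_paths, label_paths):
        image = nib.load(im_path).get_fdata()
        labels = nib.load(lab_path).get_fdata().astype(int)
        for lab in np.unique(labels):
            vals = image[labels == lab]
            means.setdefault(lab, []).append(vals.mean())
            stds.setdefault(lab, []).append(vals.std())
    prior_means = {l: (np.mean(v), np.std(v)) for l, v in means.items()}
    prior_stds = {l: (np.mean(v), np.std(v)) for l, v in stds.items()}
    return prior_means, prior_stds
```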


Figure 2: examples generated by running the tutorials on the provided data [2]. For each use case, we show the synthetic images used as inputs to the network, as well as the regression target.


Content

  • SynthSR: this is the main folder containing the generative model and training function:

    • labels_to_image_model.py: builds the generative model.

    • brain_generator.py: contains the class BrainGenerator, which is a wrapper around the model. New images can simply be generated by instantiating an object of this class and calling the method generate_image() (see the usage sketch after this list).

    • model_inputs.py: prepares the inputs of the generative model.

    • training.py: contains the function to train the network. All training parameters are explained there.

    • metrics_model.py: contains a Keras model that implements different loss functions.

    • estimate_priors.py: contains functions to estimate the prior distributions of the GMM parameters.

  • data: this folder contains the data for the tutorials (T1 scans [2], corresponding FreeSurfer segmentations, and some other useful files).

  • scripts: in addition to the tutorials, this folder also contains a script to launch trainings from the terminal.

  • ext: contains external packages.
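As a quick illustration of how the BrainGenerator wrapper mentioned above might be used, here is a hypothetical usage sketch. The constructor arguments and the exact outputs of generate_image() are assumptions, so please refer to brain_generator.py and the tutorials for the real interface and the full list of generation options.

```python
# Minimal usage sketch, assuming BrainGenerator can be constructed from a
# folder of training label maps and that generate_image() returns an
# (input, target) pair; see brain_generator.py for the actual signature.
import numpy as np
import nibabel as nib
from SynthSR.brain_generator import BrainGenerator

generator = BrainGenerator('data/labels')   # pool of training label maps

# each call samples a new synthetic input/target pair on the fly
input_image, target_image = generator.generate_image()

# save the pair to disk to inspect what the network would be trained on
nib.save(nib.Nifti1Image(np.squeeze(input_image), np.eye(4)), 'input.nii.gz')
nib.save(nib.Nifti1Image(np.squeeze(target_image), np.eye(4)), 'target.nii.gz')
```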


Requirements

This code relies on several external packages (already included in ext):

  • lab2im: contains functions for data augmentation, and a simple version of the generative model, on which we build to obtain labels_to_image_model [1].

  • neuron: contains functions for deforming and resizing tensors, as well as functions to build the segmentation network [4,5].

  • pytool-lib: library required by the neuron package.

All the other requirements are listed in requirements.txt. We list here the most important dependencies:

  • tensorflow-gpu 2.0
  • tensorflow_probability 0.8
  • keras > 2.0
  • cuda 10.0 (required by tensorflow)
  • cudnn 7.0
  • nibabel
  • numpy, scipy, sklearn, tqdm, pillow, matplotlib, ipython, ...

Citation/Contact

This repository contains the code related to a submission that is still under review.

If you have any questions regarding the usage of this code, or any suggestions to improve it, you can contact us at:
[email protected]


References

[1] A Learning Strategy for Contrast-agnostic MRI Segmentation
Benjamin Billot, Douglas N. Greve, Koen Van Leemput, Bruce Fischl, Juan Eugenio Iglesias*, Adrian V. Dalca*
*contributed equally
MIDL 2020

[2] A novel in vivo atlas of human hippocampal subfields using high-resolution 3 T magnetic resonance imaging
J. Winterburn, J. Pruessner, S. Chavez, M. Schira, N. Lobaugh, A. Voineskos, M. Chakravarty
NeuroImage (2013)

[3] FreeSurfer
Bruce Fischl
NeuroImage (2012)

[4] Anatomical Priors in Convolutional Networks for Unsupervised Biomedical Segmentation
Adrian V. Dalca, John Guttag, Mert R. Sabuncu
CVPR 2018

[5] Unsupervised Data Imputation via Variational Inference of Deep Subspaces
Adrian V. Dalca, John Guttag, Mert R. Sabuncu
arXiv preprint (2019)
