Equivariant Imaging: Learning Beyond the Range Space


Dongdong Chen, Julián Tachella, Mike E. Davies.

The University of Edinburgh

In ICCV 2021 (oral)

Figure: Learning to image from only measurements. Training an imaging network through just measurement consistency (MC) does not significantly improve the reconstruction over the simple pseudo-inverse ($A^\dagger y$). However, by enforcing invariance in the reconstructed image set, equivariant imaging (EI) performs almost as well as a fully supervised network. Top: sparse-view CT reconstruction; bottom: pixel inpainting. PSNR is shown in the top-right corner of each image.

EI is a new self-supervised, end-to-end, physics-based learning framework for inverse problems with theoretical guarantees. It leverages simple but fundamental priors about natural signals: symmetry and low-dimensionality.

Get started quickly

  • Please find the blog post for a quick introduction to EI.
  • Please find the core implementation of EI at './ei/closure/ei.py' (ei.py).
  • Please find the 30-line script get_started.py and the toy compressed-sensing (CS) example to get started with EI.

Overview

The problem: Imaging systems capture noisy measurements $y \in \mathbb{R}^m$ of a signal $x \in \mathbb{R}^n$ through a linear operator $A$, i.e. $y = Ax + \epsilon$. We aim to learn the reconstruction function $f: y \mapsto x$ where

  • NO ground-truth data $x$ is available for training, as most inverse problems don't have ground truth;
  • only a single forward operator $A$ is available;
  • $A$ has a non-trivial nullspace (e.g. $m < n$); a toy sketch of such an operator follows this list.
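
To make the setup concrete, here is a minimal, hypothetical toy sketch (not the repository's physics code) of an inpainting operator $A$ with $m < n$ and its zero-filling pseudo-inverse:

```python
# Hypothetical toy sketch: an inpainting forward operator with a non-trivial nullspace.
import torch

n = 8
mask = torch.tensor([1., 1., 0., 1., 0., 1., 0., 0.])  # observe 4 of the 8 entries (m < n)

def A(x):
    """Forward operator y = Ax: keep only the observed entries."""
    return x[mask.bool()]

def A_dagger(y):
    """Pseudo-inverse A^† y: put the measurements back and zero-fill the rest."""
    x = torch.zeros(n)
    x[mask.bool()] = y
    return x

x = torch.randn(n)
y = A(x)              # m = 4 measurements
x_zf = A_dagger(y)    # zero-filled estimate; the nullspace component of x is lost
```

Any part of $x$ in the nullspace of $A$ (the dropped entries) cannot be recovered from $y$ by measurement consistency alone.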

The challenge:

  • We have NO information about the signal set $\mathcal{X}$ outside the range space of $A^\top$ (or $A^\dagger$).
  • It is IMPOSSIBLE to learn the signal set $\mathcal{X}$ using $y$ alone.

The motivation:

We assume the signal set $\mathcal{X}$ has a low-dimensional structure and is invariant to a group of transformations $T_g$ (orthogonal matrices, e.g. shift, rotation, scaling, reflection, etc.) related to a group $\mathcal{G}$, such that $T_g x \in \mathcal{X}$ for all $x \in \mathcal{X}$ and $g \in \mathcal{G}$, and the sets $T_g \mathcal{X}$ and $\mathcal{X}$ are the same (a minimal sketch of such a group is given after the examples below). For example,

  • natural images are shift invariant;
  • in CT/MRI data, organs can be imaged at different angles, making the problem invariant to rotation.
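
As a minimal sketch of what a transformation group $\{T_g\}$ can look like in code (assuming shift invariance and 1D signals; hypothetical helpers, not the repository's transform classes):

```python
# Hypothetical sketch: a cyclic-shift group {T_g} for shift-invariant signals.
import torch

def T(x, g):
    """Apply group element g: cyclically shift the signal by g samples."""
    return torch.roll(x, shifts=g, dims=-1)

def T_inv(x, g):
    """Inverse group element T_g^{-1}: shift back by g samples."""
    return torch.roll(x, shifts=-g, dims=-1)

x = torch.randn(8)
g = 3
assert torch.allclose(T_inv(T(x, g), g), x)   # group property: T_g^{-1} T_g = identity
```

For 2D images, the same idea applies with shifts, rotations or flips of the image grid.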

Key observations:

  • Invariance provides access to implicit operators with potentially different range spaces: $y = A x = A T_g T_g^{-1} x = A_g x_g$, where $A_g = A T_g$ and $x_g = T_g^{-1} x$. Obviously, $x_g$ should also be in the signal set $\mathcal{X}$ (see the numerical sketch after this list).
  • The composition $f \circ A$ is equivariant to the group of transformations $T_g$: $f(A T_g x) = T_g f(A x)$.
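
The first observation can be checked numerically with a toy inpainting operator and shift group (hypothetical names `A` and `T`, as in the sketches above; the snippet is self-contained):

```python
# Hypothetical sketch: the implicit operator A_g = A T_g observes pixels that A never sees.
import torch

n = 8
mask = torch.tensor([1., 1., 1., 1., 0., 0., 0., 0.])  # A observes only the first 4 entries

def A(x):
    return x * mask                          # inpainting operator (zero-fill form of y = Ax)

def T(x, g):
    return torch.roll(x, shifts=g, dims=-1)  # cyclic shift T_g

x = torch.randn(n)
y   = A(x)        # reveals x[0], x[1], x[2], x[3]
y_g = A(T(x, 4))  # reveals x[4], x[5], x[6], x[7] — exactly the entries A alone never observes
```

If the signal set is shift invariant, $T_4 x$ is just another plausible signal, so $A T_4$ acts like a second, complementary measurement operator.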

Figure: Learning with and without equivariance in a toy 1D signal inpainting task. The signal set consists of different scalings of a triangular signal. On the left, the dataset does not enjoy any invariance, and hence it is not possible to learn the data distribution in the nullspace of $A$. In this case, the network can inpaint the signal in an arbitrary way (in green) while achieving zero data-consistency loss. On the right, the dataset is shift invariant. The range space of $A^\top$ is shifted via the transformations $T_g$, and the network inpaints the signal correctly.

Equivariant Imaging: to learn $f$ using only measurements $y$, all you need to do is:

  • Define:
  1. define a transformation group $\{T_g\}$ based on the known invariances of the signal set.
  2. define a neural reconstruction function $f_\theta = G_\theta \circ A^\dagger$, i.e. $f_\theta(y) = G_\theta(A^\dagger y)$, where $A^\dagger$ is the (approximated) pseudo-inverse of $A$ and $G_\theta$ is a UNet-like neural net.
  • Calculate:
  1. calculate $x^{(1)} = f_\theta(y)$ as the estimate of $x$.
  2. calculate $x^{(2)} = T_g x^{(1)}$ by transforming $x^{(1)}$.
  3. calculate $x^{(3)} = f_\theta(A x^{(2)})$ by reconstructing $x^{(2)}$ from its measurement $A x^{(2)}$.

Figure: flowchart of the EI training procedure.

  • Train: finally, learn the reconstruction function $f_\theta$ by solving $\arg\min_\theta \; \|A x^{(1)} - y\|^2 + \alpha \|x^{(3)} - x^{(2)}\|^2$, summed over the training measurements $y$ (measurement consistency plus the equivariance term; a minimal training sketch is given below).
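
A minimal PyTorch sketch of this objective, under the assumptions above (hypothetical stand-ins for the network, physics and transformation; the repository's actual implementation lives in './ei/closure/ei.py'):

```python
# Hypothetical sketch of the EI objective: measurement consistency + equivariance.
import torch

def ei_loss(model, A, A_dagger, T, y, alpha=1.0):
    """model: UNet-like G_theta; A / A_dagger: forward operator and pseudo-inverse;
    T: one transformation T_g sampled from the group; y: a (batch of) measurement(s)."""
    x1 = model(A_dagger(y))          # x^(1) = f_theta(y) = G_theta(A^† y)
    x2 = T(x1)                       # x^(2) = T_g x^(1)
    x3 = model(A_dagger(A(x2)))      # x^(3) = f_theta(A x^(2))
    loss_mc = ((A(x1) - y) ** 2).mean()   # ||A x^(1) - y||^2: measurement consistency
    loss_ei = ((x3 - x2) ** 2).mean()     # ||x^(3) - x^(2)||^2: equivariance
    return loss_mc + alpha * loss_ei

# Toy usage with stand-ins: an inpainting mask, a fixed shift T_g and a linear "network".
n = 8
mask = torch.tensor([1., 1., 1., 1., 0., 0., 0., 0.])
A = lambda x: x * mask
A_dagger = A                                      # for a binary mask, A^† = A
T = lambda x: torch.roll(x, shifts=4, dims=-1)    # in practice, sample T_g at random each step
model = torch.nn.Linear(n, n)                     # stand-in for the UNet-like G_theta
y = A(torch.randn(n))
ei_loss(model, A, A_dagger, T, y).backward()      # gradients for a standard optimizer step
```

Note that the loss is computed from measurements $y$ only; no ground-truth signals $x$ enter the objective.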

Requirements

All required packages are listed in the Anaconda environment.yml file. You can create the environment by running

conda env create -f environment.yml

Test

We provide the trained models used in the paper, which can be downloaded from Google Drive. Please put the downloaded folder 'ckp' in the root path, then evaluate the trained models by running

python3 demo_test_inpainting.py

and

python3 demo_test_ct.py

Train

To train EI for a given inverse problem (inpainting or CT), run

python3 demo_train.py --task 'inpainting'

or run a bash script to train the models for both CT and inpainting tasks.

bash train_paper_bash.sh

Train your models

To train your EI models on your dataset for a specific inverse problem (e.g. inpainting), run

python3 demo_train.py --h

  • Note: you may have to implement the forward model (physics) if you want to solve a new inverse problem; a hypothetical sketch of such a forward model follows these notes.
  • Note: you only need to specify some basic settings (e.g. the path of your training set).
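
If you do need a new forward model, the rough shape is a small class exposing the operator and its (approximate) pseudo-inverse. The sketch below is a hypothetical example (a toy downsampling problem), not the repository's actual physics API, so adapt it to the existing inpainting/CT classes:

```python
# Hypothetical sketch of a forward model ("physics") for a new inverse problem.
import torch

class Downsampling:
    """Toy physics: keep every other sample of a 1D signal, so m = n // 2."""

    def __init__(self, n):
        self.n = n

    def A(self, x):
        """Forward operator y = Ax: subsample the signal."""
        return x[..., ::2]

    def A_dagger(self, y):
        """(Approximate) pseudo-inverse: zero-fill the missing samples."""
        x = torch.zeros(*y.shape[:-1], self.n, dtype=y.dtype, device=y.device)
        x[..., ::2] = y
        return x

physics = Downsampling(n=16)
x = torch.randn(4, 16)        # a batch of toy signals
y = physics.A(x)              # measurements of length 8
x0 = physics.A_dagger(y)      # zero-filled back-projection fed to the network
```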

Citation

@inproceedings{chen2021equivariant,
	title = {Equivariant Imaging: Learning Beyond the Range Space},
	author = {Chen, Dongdong and Tachella, Juli{\'a}n and Davies, Mike E},
	booktitle = {Proceedings of the International Conference on Computer Vision (ICCV)},
	year = {2021}
}