Implementation of ICCV19 Paper "Learning Two-View Correspondences and Geometry Using Order-Aware Network"

Overview

OANet implementation

PyTorch implementation of OANet for the ICCV'19 paper "Learning Two-View Correspondences and Geometry Using Order-Aware Network", by Jiahui Zhang, Dawei Sun, Zixin Luo, Anbang Yao, Lei Zhou, Tianwei Shen, Yurong Chen, Long Quan and Hongen Liao.

This paper focuses on establishing correspondences between two images. We introduce the DiffPool and DiffUnpool layers to capture the local context of unordered sparse correspondences in a learnable manner. By the collaborative use of the DiffPool operator, we propose the Order-Aware Filtering block, which exploits the complex global context.
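
For intuition, the sketch below shows a DiffPool/DiffUnpool-style soft pooling in PyTorch: a 1x1 convolution predicts a soft assignment between the N unordered correspondences and M learned clusters, and pooling/unpooling are matrix products with that assignment. This is only a conceptual illustration with made-up layer names and shapes, not the layers used in this repo (see ./core for the actual implementation).

# Conceptual sketch of DiffPool/DiffUnpool-style soft pooling (illustrative only;
# names, shapes and normalization differ from the layers implemented in this repo).
import torch
import torch.nn as nn

class SoftPool(nn.Module):
    # Pools N unordered correspondence features into M learned clusters.
    def __init__(self, channels, num_clusters):
        super().__init__()
        self.score = nn.Conv1d(channels, num_clusters, kernel_size=1)

    def forward(self, x):                             # x: (B, C, N)
        s = torch.softmax(self.score(x), dim=2)       # (B, M, N), weights over points
        return torch.bmm(x, s.transpose(1, 2))        # (B, C, M) cluster features

class SoftUnpool(nn.Module):
    # Maps M cluster features back to the original N correspondences.
    def __init__(self, channels, num_clusters):
        super().__init__()
        self.score = nn.Conv1d(channels, num_clusters, kernel_size=1)

    def forward(self, x_orig, x_pooled):              # (B, C, N), (B, C, M)
        s = torch.softmax(self.score(x_orig), dim=1)  # (B, M, N), weights over clusters
        return torch.bmm(x_pooled, s)                 # (B, C, N) per-point features

# Example: pool 2000 correspondences into 500 clusters and map them back.
x = torch.randn(2, 128, 2000)
pool, unpool = SoftPool(128, 500), SoftUnpool(128, 500)
clusters = pool(x)               # (2, 128, 500)
recovered = unpool(x, clusters)  # (2, 128, 2000)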

This repo contains the code and data for essential matrix estimation described in our ICCV paper. In addition, we provide code for fundamental matrix estimation and for using side information (ratio test and mutual nearest neighbor check). Documentation for this part will be released soon.

Bug reports and issues are welcome!

If you find this project useful, please cite:

@article{zhang2019oanet,
  title={Learning Two-View Correspondences and Geometry Using Order-Aware Network},
  author={Zhang, Jiahui and Sun, Dawei and Luo, Zixin and Yao, Anbang and Zhou, Lei and Shen, Tianwei and Chen, Yurong and Quan, Long and Liao, Hongen},
  journal={International Conference on Computer Vision (ICCV)},
  year={2019}
}

Requirements

Please use Python 3.6, opencv-contrib-python (3.4.0.12) and PyTorch (>= 1.1.0). Other dependencies can be easily installed through pip or conda.

Example scripts

Run the demo

For a quick start, clone the repo and download the pretrained model.

git clone https://github.com/zjhthu/OANet.git 
cd OANet 
wget https://research.altizure.com/data/oanet_data/model_v2.tar.gz 
tar -xvf model_v2.tar.gz
cd model
wget https://research.altizure.com/data/oanet_data/sift-gl3d.tar.gz
tar -xvf sift-gl3d.tar.gz

Then run the fundamental matrix estimation demo.

cd ./demo && python demo.py

Generate training and testing data

First, download the YFCC100M dataset.

bash download_data.sh raw_data raw_data_yfcc.tar.gz 0 8
tar -xvf raw_data_yfcc.tar.gz

Download the SUN3D testing (1.1G) and training (31G) datasets if needed.

bash download_data.sh raw_sun3d_test raw_sun3d_test.tar.gz 0 2
tar -xvf raw_sun3d_test.tar.gz
bash download_data.sh raw_sun3d_train raw_sun3d_train.tar.gz 0 63
tar -xvf raw_sun3d_train.tar.gz

Then generate matches for YFCC100M and SUN3D (testing only). Here we provide scripts for SIFT; this will take a while.

cd dump_match
python extract_feature.py
python yfcc.py
python extract_feature.py --input_path=../raw_data/sun3d_test
python sun3d.py

If you need SUN3D training data, generate it by following the same procedure and uncommenting the corresponding lines in sun3d.py.

Test pretrained model

We provide the models trained on YFCC100M and SUN3D described in our ICCV paper. Run the test scripts to reproduce the results reported in the paper.

cd ./core 
python main.py --run_mode=test --model_path=../model/yfcc/essential/sift-2000 --res_path=../model/yfcc/essential/sift-2000/ --use_ransac=False
python main.py --run_mode=test --data_te=../data_dump/sun3d-sift-2000-test.hdf5 --model_path=../model/sun3d/essential/sift-2000 --res_path=../model/sun3d/essential/sift-2000/ --use_ransac=False

Set --use_ransac=True to get results after RANSAC post-processing.

Train model on YFCC100M

After generating the YFCC100M dataset, run the training script.

cd ./core 
python main.py

You can train the fundamental matrix estimation model by setting --use_fundamental=True --geo_loss_margin=0.03, and use side information by setting --use_ratio=2 --use_mutual=2.

Train with your own local feature or data

The provided models are trained using SIFT. We recommend retraining the model if you want to use OANet with your own local feature, such as ContextDesc or SuperPoint.

You can follow the example scripts in ./dump_match to generate a dataset for your own local feature or data.
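
As a rough starting point, the snippet below shows one way to extract SIFT with opencv-contrib-python and dump keypoints and descriptors to HDF5. The file layout and key names here are placeholders; follow the scripts in ./dump_match for the exact format expected by the training code.

# Generic feature-dump illustration; the HDF5 keys below are placeholders,
# not the format produced by ./dump_match/extract_feature.py.
import cv2
import h5py
import numpy as np

def dump_sift(image_path, out_path, num_kp=2000):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.xfeatures2d.SIFT_create(nfeatures=num_kp)
    kps, descs = sift.detectAndCompute(img, None)
    kp_xy = np.array([kp.pt for kp in kps], dtype=np.float32)  # (K, 2) pixel coords
    with h5py.File(out_path, 'w') as f:
        f.create_dataset('keypoints', data=kp_xy)
        f.create_dataset('descriptors', data=descs.astype(np.float32))

# Example usage with a placeholder image path.
dump_sift('example.jpg', 'example_sift.h5')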

Tips for training OANet: if your dataset is small and overfitting is observed, you can consider replacing the OAFilter with OAFilterBottleneck.
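
The snippet below only illustrates the general bottleneck idea (shrinking the hidden width to cut parameters and so reduce overfitting); the actual OAFilter and OAFilterBottleneck blocks in this repo are more involved, so treat this as a parameter-count sketch rather than their implementation.

# Generic bottleneck illustration (not the repo's OAFilter/OAFilterBottleneck).
import torch.nn as nn

def plain_block(channels):
    return nn.Sequential(
        nn.Conv1d(channels, channels, kernel_size=1),
        nn.ReLU(),
        nn.Conv1d(channels, channels, kernel_size=1))

def bottleneck_block(channels, reduction=4):
    mid = channels // reduction  # narrower hidden layer -> fewer weights
    return nn.Sequential(
        nn.Conv1d(channels, mid, kernel_size=1),
        nn.ReLU(),
        nn.Conv1d(mid, channels, kernel_size=1))

print(sum(p.numel() for p in plain_block(128).parameters()))       # ~33k weights
print(sum(p.numel() for p in bottleneck_block(128).parameters()))  # ~8k weights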

Here we also provide a pretrained essential matrix estimation model using ContextDesc on YFCC100M.

cd model/
wget https://research.altizure.com/data/oanet_data/contextdesc-yfcc.tar.gz
tar -xvf contextdesc-yfcc.tar.gz

To test this model, you need to generate your own data using ContextDesc and then run:

python main.py --run_mode=test --data_te=YOUR/OWN/CONTEXTDESC/DATA --model_path=../model/yfcc/essential/contextdesc-2000 --res_path=XX --use_ratio=2

Application to 3D reconstruction

[Sample 3D reconstruction results]

News

  1. Together with the local feature ContextDesc, we won both the stereo and multi-view tracks of the CVPR19 Image Matching Challenge (June 2, 2019).

  2. We also took third place on the Visual Localization Benchmark using ContextDesc (Aug. 30, 2019).

Acknowledgement

This code borrows heavily from Learned-Correspondence. If you use the parts of the code related to data generation, testing and evaluation, please cite the following paper and follow its license.

@inproceedings{yi2018learning,
  title={Learning to Find Good Correspondences},
  author={Kwang Moo Yi* and Eduard Trulls* and Yuki Ono and Vincent Lepetit and Mathieu Salzmann and Pascal Fua},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2018}
}

Changelog

2019.09.29

  • Release code for data generation.

2019.10.04

  • Release model and data for SUN3D.

2019.12.09

  • Release a general-purpose model trained on GL3D-v2, which has been tested on FM-Benchmark. This model achieves 66.1/92.3/84.0/47.0 on TUM/KITTI/T&T/CPC respectively, using SIFT.
  • Release model trained using ContextDesc.