DFM: A Performance Baseline for Deep Feature Matching

Python (PyTorch) and MATLAB (MatConvNet) implementations of our paper DFM: A Performance Baseline for Deep Feature Matching, presented at the CVPR 2021 Image Matching Workshop.

Paper (CVF) | Paper (arXiv)
Presentation (live) | Presentation (recording)

Overview

Setup Environment

We strongly recommend using Anaconda. Open a terminal in the ./python folder and run the following commands to create and activate the environment:

conda env create -f environment.yml
conda activate dfm

Dependencies
If you do not use conda, DFM needs the following dependencies:
(The versions are not strict requirements; however, we have tested DFM with these specific versions.)

  • python=3.7.1
  • pytorch=1.7.1
  • torchvision=0.8.2
  • cudatoolkit=11.0
  • matplotlib=3.3.4
  • pillow=8.2.0
  • opencv=3.4.2
  • ipykernel=5.3.4
  • pyyaml=5.4.1

Enjoy with DFM!

Now you are ready to test DFM with the following command:

python dfm.py --input_pairs image_pairs.txt

You should create the image_pairs.txt file as follows, with one image pair per line (the path of image A followed by the path of image B):

path_to_image_1A path_to_image_1B
path_to_image_2A path_to_image_2B
.
.
.
path_to_image_nA path_to_image_nB
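
For illustration only, the snippet below writes a minimal image_pairs.txt; the paths are hypothetical placeholders and should point to your own images, and it assumes the two paths on each line are separated by whitespace as in the format above.

# Hypothetical example: write image_pairs.txt with one pair per line,
# the path of image A followed by the path of image B.
pairs = [
    ("data/pair1/imageA.png", "data/pair1/imageB.png"),
    ("data/pair2/imageA.png", "data/pair2/imageB.png"),
]
with open("image_pairs.txt", "w") as f:
    for path_a, path_b in pairs:
        f.write(f"{path_a} {path_b}\n")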

If you want to run DFM with a specific configuration, you can change the following arguments in config.yml (see the sketch after this list):

  • Use enable_two_stage to enable or disable the two-stage approach (default: True)
    (Note: Enable it for planar scenes with significant viewpoint changes; otherwise, disable it.)
  • Use model to change the pre-trained model (default: VGG19)
    (Note: DFM only supports VGG19 and VGG19_BN right now; we plan to add other backbones.)
  • Use ratio_th to change the ratio test thresholds (default: [0.9, 0.9, 0.9, 0.9, 0.95, 1.0])
    (Note: The first five thresholds are for the 1st to 5th layers; the last (6th) threshold is for Stage-0 and is only used when enable_two_stage is True.)
  • Use bidirectional to enable or disable the bidirectional ratio test (default: True)
    (Note: Enable it to find more robust matches. It should normally be enabled; set it to False only to obtain results similar to our MATLAB implementation, since MATLAB's matchFeatures function does not perform the ratio test bidirectionally.)
  • Use display_results to enable or disable displaying results (default: True)
    (Note: If True, DFM saves the matched image pairs to output_directory.)
  • Use output_directory to define the output directory (default: 'results')
    (Note: An imageA_imageB_matches.npz file will be created in output_directory for each image pair.)
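
As a rough sketch (not part of the repository), the snippet below loads config.yml with PyYAML and overrides two of the fields described above; the key names follow the list above, but the actual file layout may differ slightly.

import yaml

# Load the DFM configuration; key names follow the list above, though the
# actual file layout may differ slightly.
with open("config.yml") as f:
    cfg = yaml.safe_load(f)

# Example: override two settings in memory.
cfg["enable_two_stage"] = False            # e.g. for non-planar scenes
cfg["output_directory"] = "results_demo"   # hypothetical output folder

# Write the modified configuration back (note: this drops any comments).
with open("config.yml", "w") as f:
    yaml.safe_dump(cfg, f)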

Evaluation

Currently, we do not have evaluation support for our Python implementation. You can use our Image Matching Evaluation repository (coming soon), in which we support evaluating the SuperPoint, SuperGlue, Patch2Pix, and DFM algorithms on HPatches. You can also use our MATLAB implementation (see the For Matlab Users section) to reproduce the results presented in the paper.

Notice

To reproduce the results given in the paper, use our MATLAB implementation.
You can get more accurate results (but with fewer features) using the Python implementation. This is mainly because MATLAB's matchFeatures function does not perform the ratio test bidirectionally, whereas our Python implementation performs a bidirectional ratio test. Nevertheless, we made bidirectionality adjustable in our Python implementation as well.
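
To make the difference concrete, here is a minimal sketch of a one-way versus a bidirectional (mutual) ratio test on two sets of L2-normalized descriptors; it illustrates the idea only and is not the code used in this repository.

import torch

def ratio_test(desc_a, desc_b, ratio_th=0.9):
    # desc_a: (N, D), desc_b: (M, D) L2-normalized descriptors (M >= 2).
    # Returns (indices in A, indices in B) for matches passing Lowe's ratio test A -> B.
    dist = torch.cdist(desc_a, desc_b)            # (N, M) pairwise distances
    d_sorted, nn_idx = torch.sort(dist, dim=1)    # neighbors sorted by distance
    mask = d_sorted[:, 0] < ratio_th * d_sorted[:, 1]
    return torch.nonzero(mask).squeeze(1), nn_idx[mask, 0]

def mutual_ratio_test(desc_a, desc_b, ratio_th=0.9):
    # Bidirectional version: keep only matches that pass the ratio test in
    # both directions and agree on the pairing.
    ia, jb = ratio_test(desc_a, desc_b, ratio_th)
    ib, ja = ratio_test(desc_b, desc_a, ratio_th)
    back = {int(i): int(j) for i, j in zip(ib, ja)}   # B index -> its best A index
    return [(int(i), int(j)) for i, j in zip(ia, jb) if back.get(int(j)) == int(i)]

The bidirectional version keeps a match only if the ratio test succeeds in both directions and the two directions agree on the pairing, which is why it typically yields fewer but more reliable matches.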

For Matlab Users

We have implemented and tested DFM on MATLAB R2017b.

Prerequisites

You need to install MatConvNet (we have support for matconvnet-1.0-beta24). Follow the instructions on the official website.

Once you have finished installing MatConvNet, you should download the pretrained VGG-19 network to the ./matlab/models folder.

Running DFM

Now, you are ready to try DFM!

Just open and run main_DFM.m with your own images.

Evaluation on HPatches

Download the HPatches sequences and extract them to the ./matlab/data folder.

Run main_hpatches.m, which is in the ./matlab/HPatches Evaluation folder.

A results.txt file will be generated in the ./matlab/results/HPatches folder, organized as follows (a small parsing sketch follows the list):

  • In the first column you can find the pair names.
  • In columns 2-11 you can find the Mean Matching Accuracy (MMA) results for 1-10 pixel thresholds.
  • In the 12th column you can find the number of matched features.
  • Columns 13-17 are for the best homography estimation results (denoted as boe in the paper).
  • Columns 18-22 are for the worst homography estimation results (denoted as woe in the paper).
  • Columns 22-71 are for 10 different homography estimation tests.
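
If it helps, the following sketch averages the MMA columns of results.txt over all pairs; it assumes whitespace-separated columns in the layout described above, so adjust the parsing if the actual delimiter or path differs.

# Summarize MMA over all pairs, assuming whitespace-separated columns:
# pair name (column 1), then MMA at 1-10 pixel thresholds (columns 2-11).
rows = []
with open("matlab/results/HPatches/results.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) < 11:
            continue
        rows.append([float(v) for v in parts[1:11]])

mean_mma = [sum(col) / len(rows) for col in zip(*rows)]
for th, score in enumerate(mean_mma, start=1):
    print(f"MMA@{th}px: {score:.3f}")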

BibTeX Citation

Please cite our paper if you use the code:

@InProceedings{Efe_2021_CVPR,
    author    = {Efe, Ufuk and Ince, Kutalmis Gokalp and Alatan, Aydin},
    title     = {DFM: A Performance Baseline for Deep Feature Matching},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {4284-4293}
}