Code release for the paper "Deep Multi-View Stereo gone wild"

Overview

Deep MVS gone wild

PyTorch implementation of "Deep MVS gone wild" (Paper | website)

This repository provides the code to reproduce the experiments of the paper. It implements an extensive comparison of deep MVS architectures, training data, and supervision.

If you find this repository useful for your research, please consider citing:

@article{darmon2021deep,
  author    = {Darmon, Fran{\c{c}}ois  and
               Bascle, B{\'{e}}n{\'{e}}dicte  and
               Devaux, Jean{-}Cl{\'{e}}ment  and
               Monasse, Pascal  and
               Aubry, Mathieu},
  title     = {Deep Multi-View Stereo gone wild},
  year      = {2021},
  url       = {https://arxiv.org/abs/2104.15119},
}

Installation

  • Python packages: see requirements.txt (a minimal install sketch follows this list)

  • Fusibile:

git clone https://github.com/YoYo000/fusibile
cd fusibile
cmake .
make
ln -s EXE ./fusibile   # EXE is the path to the built fusibile executable
  • COLMAP: see the GitHub repository for installation details, then link the colmap executable with ln -s COLMAP_DIR/build/src/exe/colmap colmap
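For the Python packages, a minimal setup sketch (assuming pip inside a fresh virtual environment; only requirements.txt is given by this repository):

python -m venv venv                 # optional: create an isolated environment
source venv/bin/activate
pip install -r requirements.txt     # install the pinned Python dependencies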

Training

You can download all the pretrained models here (120 MB), or alternatively train the models yourself using the following instructions.

Data

Download the following data and extract it to the datasets folder.

The directory structure should be as follows:

datasets
├─ blended
├─ dtu_train
├─ MegaDepth_v1
├─ undistorted_md_geometry

The DTU and BlendedMVS data is already preprocessed. For MegaDepth, run python preprocess.py to generate the training data.
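As a rough sketch, the data preparation could look as follows (the archive names are illustrative placeholders for the downloaded files; only preprocess.py is confirmed by this README):

mkdir -p datasets
# extract each downloaded archive into datasets/, for example:
unzip blended.zip -d datasets/                 # illustrative archive name
tar -xzf MegaDepth_v1.tar.gz -C datasets/      # illustrative archive name
python preprocess.py                           # generate the MegaDepth training data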

Script

The training script is train.py; run python train.py --help to list all options. For example:

  • python train.py --architecture vis_mvsnet --dataset md --supervised --logdir best_sup --world_size 4 --batch_size 4 trains the best-performing setup for images in the wild.
  • python train.py --architecture mvsnet-s --dataset md --unsupervised --upsample --occ_masking --epochs 5 --lrepochs 4:10 --logdir best_unsup --world_size 3 trains the best unsupervised model.

The models are saved in the trained_models folder.
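If the training script writes TensorBoard summaries into the directory passed via --logdir (an assumption based on the flag name, not stated in this README), progress can be monitored with:

tensorboard --logdir best_sup    # assumes TensorBoard event files are written to the logdir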

Evaluations

We provide code for both depthmap evaluation and 3D reconstruction evaluation.

Data

Download the following data and extract it to datasets:

  • BlendedMVS (27.5 GB): same link as the BlendedMVS training data

  • YFCC depth maps (1.1 GB)

  • DTU MVS benchmark: create the directory datasets/dtu_eval and extract the following files.

    In the end, the folder structure should be:

    datasets
    ├─ dtu_eval
        ├─ ObsMask
        ├─ images
        ├─ Points
            ├─ stl
    
  • YFCC 3D reconstruction (1.5 GB)

Depthmap evaluation

python depthmap_eval.py --model MODEL --dataset DATA

  • MODEL is the name of a folder found in trained_models
  • DATA is the evaluation dataset, either yfcc or blended
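For example, assuming a model saved under trained_models/best_sup (the logdir name used in the training example above):

python depthmap_eval.py --model best_sup --dataset yfcc      # YFCC depth map evaluation
python depthmap_eval.py --model best_sup --dataset blended   # BlendedMVS depth map evaluation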

3D reconstruction

See python reconstruction_pipeline.py --help for the complete list of 3D reconstruction parameters. To run the whole evaluation for a trained model with the parameters used in the paper, run:

  • scripts/eval3d_dtu.sh --model MODEL (--compute_metrics) for DTU evaluation
  • scripts/eval3d_yfcc.sh --model MODEL (--compute_metrics) for YFCC 3D evaluation

The reconstructions will be located in datasets/dtu_eval/Points or datasets/yfcc_data/Points, respectively.
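For example, again assuming a trained model folder named best_sup:

scripts/eval3d_dtu.sh --model best_sup --compute_metrics     # DTU reconstruction and metrics
scripts/eval3d_yfcc.sh --model best_sup --compute_metrics    # YFCC reconstruction and metrics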

Acknowledgments

This repository is inspired by the MVSNet_pytorch and MVSNet repositories. We also adapted the official implementations of Vis_MVSNet and CVP_MVSNet.

Copyright

Deep MVS Gone Wild. All rights reserved to Thales LAS and ENPC.

This code is freely available for academic use only and is provided "as is" without any warranty.

Modifications are allowed for academic research provided that the following conditions are met:
  * Redistributions of source code or any format must retain the above copyright notice and this list of conditions.
  * Neither the name of Thales LAS and ENPC nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.