MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera


Paper | Video (CVPR) | Video (Reconstruction) | Project Page

This repository is the official implementation of the paper:

MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera

Felix Wimbauer*, Nan Yang*, Lukas von Stumberg, Niclas Zeller and Daniel Cremers

CVPR 2021 (arXiv)

If you find our work useful, please consider citing our paper:

@InProceedings{wimbauer2020monorec,
  title = {{MonoRec}: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera},
  author = {Wimbauer, Felix and Yang, Nan and von Stumberg, Lukas and Zeller, Niclas and Cremers, Daniel},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2021},
}

🏗️ Setup

The conda environment for this project can be set up by running the following command:

conda env create -f environment.yml

🏃 Running the Example Script

We provide a sample from the KITTI Odometry test set and a script to run MonoRec on it in example/. To download the pretrained model and put it into the right place, run download_model.sh. Alternatively, you can do this manually by downloading the weights from here and unpacking the file to saved/checkpoints/monorec_depth_ref.pth. The example script will plot the keyframe, the depth prediction and the mask prediction.

cd example
python test_monorec.py
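
If you want to inspect or reuse the downloaded weights outside of the example script, the checkpoint file can be opened directly with PyTorch. This is only a minimal sketch; the "state_dict" key is an assumption based on common PyTorch checkpoint layouts, so inspect the file if the keys differ:

import torch

# Load the downloaded checkpoint on the CPU; adjust the path if you
# unpacked the weights somewhere else.
checkpoint = torch.load("saved/checkpoints/monorec_depth_ref.pth", map_location="cpu")

# Many PyTorch training setups wrap the weights in a dict under "state_dict";
# fall back to treating the whole file as a bare state dict otherwise.
state_dict = checkpoint.get("state_dict", checkpoint)
print(f"Loaded {len(state_dict)} parameter tensors")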

🗃️ Data

In all of our experiments, we used the KITTI Odometry dataset for training. For additional evaluations, we used the KITTI, Oxford RobotCar, TUM Mono-VO and TUM RGB-D datasets. All data paths can be specified in the respective configuration files. In our experiments, we put all datasets into a separate folder ../data.

KITTI Odometry

To set up KITTI Odometry, download the color images and calibration files from the official website (around 145 GB). Instead of the given velodyne laser data files, we use the improved ground truth depth for evaluation, which can be downloaded from here.

Unzip the color images and calibration files into ../data. The lidar depth maps can be extracted into the given folder structure by running data_loader/scripts/preprocess_kitti_extract_annotated_depth.py.
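
For reference, the improved ground truth follows the KITTI depth benchmark convention: depth is stored as a 16-bit PNG whose pixel values, divided by 256, give depth in meters, with 0 marking pixels without ground truth. A minimal reading sketch (the exact output location of the preprocessing script may differ):

import numpy as np
from PIL import Image

def read_kitti_depth(png_path):
    # KITTI annotated depth: 16-bit PNG, depth in meters = value / 256;
    # a value of 0 means no ground-truth measurement at that pixel.
    depth_png = np.array(Image.open(png_path), dtype=np.uint16)
    depth = depth_png.astype(np.float32) / 256.0
    depth[depth_png == 0] = 0.0
    return depth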

For training and evaluation, we use the poses estimated by Deep Virtual Stereo Odometry (DVSO). They can be downloaded from here and should be placed under ../data/{kitti_path}/poses_dso. This folder structure is ensured when unpacking the zip file in the {kitti_path} directory.

The auxiliary moving object masks can be downloaded from here. They should be placed under ../data/{kitti_path}/sequences/{seq_num}/mvobj_mask. This folder structure is ensured when unpacking the zip file in the {kitti_path} directory.
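
Once everything is unpacked, the KITTI folder should look roughly like this (sequence 00 shown; the name of the extracted depth folder is an assumption, check the preprocessing script for the exact path):

../data/{kitti_path}/
    poses_dso/                      DVSO poses
    sequences/
        00/
            calib.txt
            image_2/                left color images
            image_3/                right color images
            image_depth_annotated/  extracted ground truth depth (name assumed)
            mvobj_mask/             auxiliary moving object masks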

Oxford RobotCar

To set up Oxford RobotCar, download the camera model files and the large sample from the official website. The code, as well as the camera extrinsics, needs to be downloaded from the official GitHub repository. Please move the content of the python folder to data_loaders/oxford_robotcar/. extrinsics/, models/ and sample/ need to be moved to ../data/oxford_robotcar/. Note that for poses we use the official visual odometry poses, which are not provided in the large sample. They need to be downloaded manually from the raw dataset and unpacked into the sample folder.
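
Put together, the expected layout looks roughly like this (the vo/ folder name follows the raw dataset download and is an assumption; everything else follows the steps above):

data_loaders/oxford_robotcar/       content of the SDK's python folder
../data/oxford_robotcar/
    extrinsics/
    models/
    sample/                         large sample data
        vo/                         visual odometry poses from the raw dataset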

TUM Mono-VO

Unfortunately, the TUM Mono-VO images are provided only in their original, distorted form. Therefore, they first need to be undistorted before being fed into MonoRec. To obtain poses for the sequences, we run the publicly available version of Direct Sparse Odometry (DSO).
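
The TUM Mono-VO sequences use the FOV (ATAN) camera model of Devernay and Faugeras rather than the standard radial distortion model. The following is only a rough sketch of FOV undistortion, assuming fx, fy, cx and cy have already been scaled to pixel units from the normalized values in camera.txt; in practice, the undistortion code that ships with DSO handles the full calibration format and may be preferable:

import cv2
import numpy as np

def undistort_fov(img, fx, fy, cx, cy, omega):
    # Map every pixel of the undistorted output back to the distorted input
    # using the FOV model: r_d = arctan(2 * r_u * tan(omega / 2)) / omega.
    h, w = img.shape[:2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx
    y = (v - cy) / fy
    r_u = np.sqrt(x ** 2 + y ** 2)
    r_d = np.arctan(2.0 * r_u * np.tan(omega / 2.0)) / omega
    scale = np.where(r_u > 1e-8, r_d / r_u, 1.0)
    map_x = (x * scale * fx + cx).astype(np.float32)
    map_y = (y * scale * fy + cy).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)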

TUM RGB-D

The official sequences can be downloaded from the official website and need to be unpacked under ../data/tumrgbd/{sequence_name}. Note that our provided dataset implementation assumes the intrinsics of the fr3 sequences, and that the data loader for this dataset also relies on code from the Oxford RobotCar dataset.
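
For reference, TUM RGB-D depth images are 16-bit PNGs with a scale factor of 5000, i.e. a pixel value of 5000 corresponds to a depth of 1 meter. A minimal reading sketch:

import numpy as np
from PIL import Image

def read_tum_rgbd_depth(png_path):
    # TUM RGB-D depth: 16-bit PNG, depth in meters = value / 5000;
    # a value of 0 marks pixels without a valid depth reading.
    depth_png = np.array(Image.open(png_path), dtype=np.uint16)
    return depth_png.astype(np.float32) / 5000.0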

🏋️ Training & Evaluation

Please stay tuned! Training code will be published soon!

We provide checkpoints for each training stage:

Training stage                 | Download
-------------------------------|---------
Depth Bootstrap                | Link
Mask Bootstrap                 | Link
Mask Refinement                | Link
Depth Refinement (final model) | Link

Run download_model.sh to download the final model; it is automatically placed in saved/checkpoints.

To reproduce the evaluation results on different datasets, run the following commands:

python evaluate.py --config configs/evaluate/eval_monorec.json        # KITTI Odometry
python evaluate.py --config configs/evaluate/eval_monorec_oxrc.json   # Oxford RobotCar

☁️ Pointclouds

To reproduce the pointclouds depicted in the paper and video, use the following commands:

python create_pointcloud.py --config configs/test/pointcloud_monorec.json       # KITTI Odometry
python create_pointcloud.py --config configs/test/pointcloud_monorec_oxrc.json  # Oxford RobotCar
python create_pointcloud.py --config configs/test/pointcloud_monorec_tmvo.json  # TUM Mono-VO
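
The generated pointclouds can be inspected with any viewer that reads standard pointcloud formats; below is a minimal Open3D sketch with a hypothetical output path (check the output entry of the respective config for where the file is actually written):

import open3d as o3d

# Hypothetical output path; see the pointcloud config for the actual location.
pcd = o3d.io.read_point_cloud("saved/pointclouds/kitti.ply")
o3d.visualization.draw_geometries([pcd])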