Implementation of the ICCV'21 paper Temporally-Coherent Surface Reconstruction via Metric-Consistent Atlases

Overview

Temporally-Coherent Surface Reconstruction via Metric-Consistent Atlases [Papers 1, 2] [Project page] [Video]

This repository contains the implementation of both papers.

Install

The framework was tested with Python 3.8, PyTorch 1.7.0, and CUDA 11.0. The easiest way to work with the code is to create a new virtual Python environment and install the required packages.

  1. Install virtualenvwrapper.
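A minimal sketch of this step, assuming installation via pip (virtualenvwrapper additionally requires sourcing virtualenvwrapper.sh in your shell startup file; see its documentation):
pip install virtualenvwrapper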
  2. Create a new environment and install the required packages.
mkvirtualenv --python=python3.8 tcsr
pip install -r requirements.txt
  3. Install PyTorch3D.
cd ~
curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
tar xzf 1.10.0.tar.gz
export CUB_HOME=$PWD/cub-1.10.0
pip install git+https://github.com/facebookresearch/pytorch3d.git@stable
  4. Get the code and prepare the environment as follows:
git clone git@github.com:bednarikjan/temporally_coherent_surface_reconstruction.git
cd temporally_coherent_surface_reconstruction
git submodule update --init --recursive
export PYTHONPATH="${PYTHONPATH}:path/to/dir/temporally_coherent_surface_reconstruction"
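To verify that the environment is set up correctly, you can run a quick optional sanity check which imports the key packages:
python -c "import torch, pytorch3d; print(torch.__version__, torch.cuda.is_available())"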

Get the Data

The project was tested on 6 base datasets (and their derivatives). Each dataset has to be processed to generate the input point clouds for training, the GT correspondences for evaluation, and other auxiliary data. To do so, please use the individual scripts in tcsr/process_datasets. For each dataset, follow these steps:

  1. Download the data (links below).
  2. Open the script <dataset_name>.py and set the input/output paths.
  3. Run the script: python <dataset_name>.py (see the example below).
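For example, for the ANIM dataset the call might look as follows (the script name anim.py is an assumption for illustration; check tcsr/process_datasets for the actual file names):
python tcsr/process_datasets/anim.py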

1. ANIM

  • Download the sequences horse gallop, horse collapse, camel gallop, camel collapse, and elephant gallop.
  • Download the sequence walking cat.

2. AMA

  • Download all 10 sequences, meshes only.

3. DFAUST

4. CAPE

  • Request access to the raw scans and download them.
  • At the time of writing the paper (September 2021), four subjects (00032, 00096, 00159, 03223) were available and used in the paper.

5. INRIA

  • Request access to the dataset and download it.
  • At the time of writing the paper (September 2021), four subjects (s1, s2, s3, s6) were available and used in the paper.

6. CMU

Train

The provided code allows for training our proposed method (OUR) as well as the other atlas-based approaches, Differential Surface Representation (DSR) and AtlasNet (AN). The training is configured using the *.yaml configuration scripts in tcsr/train/configs.

There are 9 sample configuration files our_<dataset_name>.yaml, which train OUR on each individual dataset, and 2 sample configuration files an_anim.yaml and dsr_anim.yaml, which train AN and DSR, respectively, on the ANIM dataset.

By default, the training uses the exact settings from the paper, namely it trains for 200,000 iterations using SGD with a learning rate of 0.001 and a batch size of 4. These settings can be altered in the configuration files.

Before starting the training, follow these steps:

  • Open the source file tcsr/data/data_loader.py and set the paths to the datasets in each dataset class.
  • Open the desired training configuration *.yaml file in tcsr/train/configs/ and set the output path for the training run data in the attribute path_train_run, as shown below.
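For example, the relevant line in the configuration file could look as follows (the path is a placeholder):
path_train_run: /path/to/train_runs_root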

Start the training using the script tcsr/train/train.py:

python train.py --conf configs/<file_name>.yaml

By default, the script saves the training progress every 2,000 iterations, so you can safely kill it at any point and resume the training later using:

python train.py --cont path/to/training_run/root_dir

Evaluate

To evaluate a trained model on the dense correspondence prediction task, use the script tcsr/evaluate/eval_dataset.py, which allows for evaluating multiple sequences (i.e., individual training runs within one dataset) at once. Please have a look at the command line arguments in the file.

For example, to evaluate two training runs for the sequences cat_walk and horse_gallop of the ANIM dataset, both contained in the root directory train_runs_root, run:

python eval_dataset.py /path/to/train_runs_root --ds anim --include_seqs cat_walk horse_gallop  

The script produces a *.csv file in train_runs_root with the 4 measured metrics (see the paper).
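To get a quick look at the resulting metrics from the command line, you can pretty-print the CSV (the glob is just a convenience, since the exact file name is generated by the script):
column -t -s, /path/to/train_runs_root/*.csv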

Visualize

There are currently two ways to visualize the predictions.

1. Tensorboard

By default, the training script saves the GT and the predicted point clouds (for a couple of random data samples) every 2,000 iterations. These can be viewed within Tensorboard. Each patch is visualized with a different color. This visualization is mostly useful as a sanity check during the training to see that the model is converging as expected.

  • Navigate to the root directory of the training runs and run:
tensorboard --logdir=. --port=8008 --bind_all
  • Open your browser and navigate to http://localhost:8008/
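If the training runs on a remote machine (which is what the --bind_all flag is useful for), you can alternatively forward the port over SSH and use the same local URL; the host name below is a placeholder:
ssh -N -L 8008:localhost:8008 user@remote-host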

2. Per-sequence reconstruction GIF

You can view the reconstructed surfaces as a patch-wise textured mesh rendered into a GIF video. For this purpose, use the IPython Notebook file tcsr/visualize/render_uv.ipynb and open it in JupyterLab, which allows for viewing the GIF right after running the code.
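For example, you can install and launch JupyterLab from the same virtual environment as follows:
pip install jupyterlab
jupyter lab tcsr/visualize/render_uv.ipynb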

The rendering parameters (such as the camera location, texturing mode, GIF speed, etc.) are set using the configuration file tcsr/visualize/conf_patches.yaml. There are sample configurations for the sequence cat_walk, which can be used to write configurations for other sequences/datasets.

Before running the cells, set the variables in the second cell (paths, models, data).

Citation

@inproceedings{bednarik2021temporally_coherent,
   title = {Temporally-Coherent Surface Reconstruction via Metric-Consistent Atlases},
   author = {Bednarik, Jan and Kim, Vladimir G. and Chaudhuri, Siddhartha and Parashar, Shaifali and Salzmann, Mathieu and Fua, Pascal and Aigerman, Noam},
   booktitle = {Proceedings of IEEE International Conference on Computer Vision (ICCV)},
   year = {2021}
}

@inproceedings{bednarik2021temporally_consistent,
   title = {Temporally-Consistent Surface Reconstruction via Metrically-Consistent Atlases},
   author = {Bednarik, Jan and Aigerman, Noam and Kim, Vladimir G. and Chaudhuri, Siddhartha and Parashar, Shaifali and Salzmann, Mathieu and Fua, Pascal},
   booktitle = {arXiv},
   year = {2021}
}

Acknowledgements

This work was partially done while the main author was an intern at Adobe Research.

TODO

  • Add support for visualizing the correspondence error heatmap on the GT mesh.
  • Add support for visualizing the color-coded correspondences on the GT mesh.
  • Add support for generating the pre-aligned AMA dataset using ICP.
  • Add the code for the non-rigid ICP experiments.