Shape and Material Capture at Home, CVPR 2021.

Daniel Lichy, Jiaye Wu, Soumyadip Sengupta, David Jacobs

A bare-bones capture setup

Overview

This is the official code release for the paper Shape and Material Capture at Home. The code enables you to reconstruct a 3D mesh and Cook-Torrance BRDF from one or more images captured with a flashlight or camera with flash.

We provide:

  • The trained RecNet model.
  • Code to test on the DiLiGenT dataset.
  • Code to test on our dataset from the paper.
  • Code to test on your own dataset.
  • Code to train a new model, including code for visualization and logging.

Dependencies

This project uses the following dependencies:

  • Python 3.8
  • PyTorch (version 1.8.1)
  • torchvision
  • numpy
  • scipy
  • opencv
  • OpenEXR (only required for training)

The easiest way to run the code is by creating a virtual environment and installing the dependencies with pip, e.g.:

# Create a new python3.8 environment named py3.8
virtualenv py3.8 -p python3.8

# Activate the created environment
source py3.8/bin/activate

# Upgrade pip
pip install --upgrade pip

# Install the dependencies
python -m pip install -r requirements.txt
# or, if you don't need OpenEXR (only required for training):
python -m pip install -r requirements_no_exr.txt
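After installing, you can optionally verify that the core dependencies import correctly with a quick check like the following (OpenEXR is omitted since it is only needed for training):

# Optional sanity check that the core dependencies are importable.
import torch
import torchvision
import numpy
import scipy
import cv2

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())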

Capturing your own dataset

Multi-image captures

The video below shows how to capture the (up to) six images for your own dataset. Angles are approximate and can be estimated by eye. The camera should be approximately 1 to 4 feet from the object, and the flashlight should be far enough from the object that the entire object is within its illumination cone.

We used this flashlight, but any bright flashlight should work. We used this tripod, which comes with a handy remote for iPhone and Android.

Please see the Project Page for a higher resolution version of this video.

Example reconstructions:


Single image captures

Our network also provides state-of-the-art results for reconstructing shape and material from a single flash image.

Examples captured with just an iPhone with flash enabled in a dim room (complete darkness is not needed):


Mask Making

For best performance, you should supply a segmentation mask with your image. For our paper we used https://github.com/saic-vul/fbrs_interactive_segmentation, which enables mask making with just a few clicks.

Normal prediction results are reasonable without a mask, but integrating the normals into a mesh without one can be challenging.
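If you want to preview a mask before running the network, here is a minimal sketch using OpenCV (already a dependency). The file paths are placeholders that follow the dataset layout described below; adjust them to match your own capture.

import cv2
import numpy as np

# Illustrative paths only; substitute your own image and mask.
image = cv2.imread("data/sample_dataset/sample_name1/2.png")
mask = cv2.imread("data/sample_dataset/sample_name1/mask.png", cv2.IMREAD_GRAYSCALE)

# Zero out everything outside the mask to preview the segmentation.
binary = (mask > 0).astype(np.uint8)
masked = cv2.bitwise_and(image, image, mask=binary)
cv2.imwrite("masked_preview.png", masked)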

Test RecNet on the DiLiGenT dataset

# Download and prepare the DiLiGenT dataset
sh scripts/prepare_diligent_dataset.sh

# Test on 3 DiLiGenT images from the front, front-right, and front-left
# (if you only have CPUs, remove the --gpu argument)
python eval_diligent.py results_path --gpu

# To test on a different subset of DiLiGenT images, use the argument --image_nums n1 n2 n3 n4 n5 n6
# where n1 to n6 are the image indices of the right, front-right, front, front-left, left, and above
# images, respectively. For images that are not present, set the image number to -1.
# E.g., to test on only the front image (image number 51), run:
python eval_diligent.py results_path --gpu --image_nums -1 -1 51 -1 -1 -1

Test on our dataset/your own dataset

The easiest way to test on your own dataset or on our dataset is to format it as follows:

dataset_dir:

  • sample_name1:
    • 0.ext (right)
    • 1.ext (front-right)
    • 2.ext (front)
    • 3.ext (front-left)
    • 4.ext (left)
    • 5.ext (above)
    • mask.ext
  • sample_name2: (if an image is not present, simply omit it from the directory)
    • 2.ext (front)
    • 3.ext (front-left)
  • ...

where .ext is the image extension, e.g. .png, .jpg, or .exr.

For an example of how to format your own dataset, please look in data/sample_dataset.

Then run:

python eval_standard.py results_path --dataset_root path_to_dataset_dir --gpu

# To test on a sample of our dataset run
python eval_standard.py results_path --dataset_root data/sample_dataset --gpu
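Before running the commands above, you can sanity-check your dataset layout with a short illustrative script like the one below. It walks the dataset directory and reports which of the six views and the mask are present for each sample; the extension list is an assumption based on the examples above.

import os

# The six view indices and names follow the layout described above.
VIEW_NAMES = ["right", "front-right", "front", "front-left", "left", "above"]
# Assumed extensions, per the examples above; adjust as needed.
EXTENSIONS = [".png", ".jpg", ".exr"]

def check_dataset(dataset_dir):
    for sample in sorted(os.listdir(dataset_dir)):
        sample_dir = os.path.join(dataset_dir, sample)
        if not os.path.isdir(sample_dir):
            continue
        views = [
            name
            for i, name in enumerate(VIEW_NAMES)
            if any(os.path.exists(os.path.join(sample_dir, f"{i}{ext}")) for ext in EXTENSIONS)
        ]
        has_mask = any(
            os.path.exists(os.path.join(sample_dir, f"mask{ext}")) for ext in EXTENSIONS
        )
        print(f"{sample}: views={views}, mask={'yes' if has_mask else 'no'}")

check_dataset("data/sample_dataset")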

Download our real dataset

Coming Soon...

Integrating Normal Maps and Producing a Mesh

We include a script to integrate normals and produce a ply mesh with per vertex albedo and roughness.

After running eval_standard.py or eval_diligent.py there will be a file results_path/images/integration_data.csv. Running the following command will produce a ply mesh in results_path/images/sample_name/mesh.ply:

python integrate_normals.py results_path/images/integration_data.csv --gpu

This is the most time-intensive part of the reconstruction and takes about 3 minutes to run on a GPU and 5 minutes on a CPU.
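To inspect the result programmatically, here is a minimal sketch using the third-party plyfile package (pip install plyfile; not one of this repo's listed dependencies). It simply lists whatever per-vertex properties the integration script wrote, so it makes no assumptions about the exact albedo/roughness channel names.

from plyfile import PlyData

# Load the mesh produced by integrate_normals.py (path per the docs above).
ply = PlyData.read("results_path/images/sample_name/mesh.ply")
vertices = ply["vertex"]

# List the per-vertex properties (positions plus the albedo/roughness
# channels written by the integration script).
print("properties:", [p.name for p in vertices.properties])
print("num vertices:", vertices.count)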

Training

To train RecNet from scratch:

python train.py log_dir --dr_dataset_root path_to_dr_dataset --sculpt_dataset_root path_to_sculpture_dataset --gpu

Download the training data

Coming Soon...

FAQ

Q1: What should I do if I have problems running your code?

  • Please create an issue if you encounter errors when trying to run the code, and feel free to include a full bug report.

Citation

If you find this code or the provided models useful in your research, please cite it as:

@inproceedings{lichy_2021,
  title={Shape and Material Capture at Home},
  author={Lichy, Daniel and Wu, Jiaye and Sengupta, Soumyadip and Jacobs, David W.},
  booktitle={CVPR},
  year={2021}
}

Acknowledgement

Code used for downloading and loading the DiLiGenT dataset is adapted from https://github.com/guanyingc/SDPS-Net
