PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time

Overview

The implementation is based on the paper "PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time" (SIGGRAPH Asia 2020).

Dependencies

  • Python 3.7
  • Ubuntu 18.04 (the code should also run on other Ubuntu versions and on Windows, but this has not been tested)
  • RBDL: Rigid Body Dynamics Library (https://rbdl.github.io/)
  • PyTorch 1.8.1 with GPU support (CUDA 10.2 is tested to work)
  • For other Python packages, please check requirements.txt

Installation

  • Download and install RBDL with its Python bindings from https://github.com/rbdl/rbdl

  • Install PyTorch 1.8.1 with GPU support (https://pytorch.org/); other versions should also work but have not been tested

  • Install the remaining Python packages:

      pip install -r requirements.txt
    

How to Run on the Sample Data

We provide sample data taken from the DeepCap dataset (CVPR 2020). To run the code on the sample data, first go to the physcap_release directory and run:

python pipeline.py --contact_estimation 0 --floor_known 1 --floor_frame  data/floor_frame.npy  --humanoid_path asset/physcap.urdf --skeleton_filename asset/physcap.skeleton --motion_filename data/sample.motion --contact_path data/sample_contacts.npy --stationary_path data/sample_stationary.npy --save_path './results/'

To visualize the prediction, run:

python visualizer.py --q_path ./results/PhyCap_q.npy
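
If you would like to inspect the saved result directly rather than through the visualizer, it can be loaded with NumPy. Below is a minimal sketch; interpreting the array as one row of joint-angle states per frame is our assumption, not a documented interface:

    import numpy as np

    # Kinematic states written by pipeline.py (path from the command above).
    q = np.load("./results/PhyCap_q.npy")

    # Assumed layout: one row per frame, one column per degree of freedom of physcap.urdf.
    print("shape:", q.shape, "dtype:", q.dtype)
    print("first frame:", q[0])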

To run PhysCap with its full functionality, the floor position should be given as a 4x4 matrix (rotation and translation). If you don't know the floor position, you can still run PhysCap with the "--floor_known 0" option:

python pipeline.py --contact_estimation 0 --floor_known 0  --humanoid_path asset/physcap.urdf --skeleton_filename asset/physcap.skeleton --motion_filename data/sample.motion --save_path './results/'
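
If the floor position is known, the file passed to --floor_frame (data/floor_frame.npy in the first command above) holds this 4x4 transform. The sketch below shows one way to write such a file with NumPy; the exact convention expected by pipeline.py (homogeneous [R|t] layout, units, axis orientation) is an assumption here, so compare against the provided data/floor_frame.npy:

    import numpy as np

    # Assumed format: a single 4x4 homogeneous transform describing the floor frame.
    R = np.eye(3)                  # floor rotation (identity: floor aligned with the world axes)
    t = np.array([0.0, 0.0, 0.0])  # floor translation (assumed to be in meters)

    floor_frame = np.eye(4)
    floor_frame[:3, :3] = R
    floor_frame[:3, 3] = t

    np.save("data/my_floor_frame.npy", floor_frame)
    # Then run pipeline.py with: --floor_known 1 --floor_frame data/my_floor_frame.npy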

How to Run on Your Data

  1. Run Stage I:

    We employ VNect for Stage I of the PhysCap pipeline. Please install the VNect C++ library and use its predictions to run PhysCap. When running VNect, replace "default.skeleton" with "physcap.skeleton" from the asset folder, which is compatible with the PhysCap skeleton definition (physcap.urdf). After running VNect on your sequence, the predictions (motion.motion and ddd.mdd) are saved in the specified folder. For this example, we assume the predictions are saved under the "data/VNect_data" folder.

  2. Run Stages II and III:

    First, run the following command to preprocess the 2D keypoints:

     python process_2Ds.py --input ./data/VNect_data/ddd.mdd --output ./data/VNect_data/ --smoothing 0
    

    The processed keypoints will be stored as "vnect_2ds.npy". Then run Stages II and III with the following command:

     python pipeline.py --contact_estimation 1 --vnect_2d_path ./data/VNect_data/vnect_2ds.npy --save_path './results/' --floor_known 0 --humanoid_path asset/physcap.urdf --skeleton_filename asset/physcap.skeleton --motion_filename ./data/VNect_data/motion.motion --contact_path results/contacts.npy --stationary_path results/stationary.npy  
    

    If you know the exact floor position, you can instead use the options --floor_known 1 --floor_frame /Path/To/FloorFrameFile.

    To visualize the results, run:

     python visualizer.py --q_path ./results/PhyCap_q.npy
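
    Besides PhyCap_q.npy, the Stage II/III command above points --contact_path and --stationary_path at results/contacts.npy and results/stationary.npy. Once these files exist, they can be sanity-checked with NumPy; reading them as per-frame binary labels is an assumption:

     import numpy as np

     # Paths as passed to pipeline.py above.
     contacts = np.load("results/contacts.npy")
     stationary = np.load("results/stationary.npy")

     # Assumed layout: one entry per frame with binary contact / stationary flags.
     print("contacts:", contacts.shape, contacts.dtype)
     print("stationary:", stationary.shape, stationary.dtype)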
    

License Terms

Permission is hereby granted, free of charge, to any person or company obtaining a copy of this software and associated documentation files (the "Software") from the copyright holders to use the Software for any non-commercial purpose. Publication, redistribution and (re)selling of the software, of modifications, extensions, and derivatives of it, and of other software containing portions of the licensed Software, are not permitted. The Copyright holder is permitted to publicly disclose and advertise the use of the software by any licensee.

Packaging or distributing parts or whole of the provided software (including code, models and data) as is or as part of other software is prohibited. Commercial use of parts or whole of the provided software (including code, models and data) is strictly prohibited. Using the provided software for promotion of a commercial entity or product, or in any other manner which directly or indirectly results in commercial gains is strictly prohibited.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Citation

If the code is used, the licensee is required to cite the use of VNect and the following publication in any documentation or publication that results from the work:

@article{
	PhysCapTOG2020,
	author = {Shimada, Soshi and Golyanik, Vladislav and Xu, Weipeng and Theobalt, Christian},
	title = {PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time},
	journal = {ACM Transactions on Graphics}, 
	month = {dec},
	volume = {39},
	number = {6}, 
	articleno = {235},
	year = {2020}, 
	publisher = {ACM}, 
	keywords = {physics-based, 3D, motion capture, real time}
} 