Code for the ECCV 2020 paper "Contact and Human Dynamics from Monocular Video".

Overview

Contact and Human Dynamics from Monocular Video

This is the official implementation for the ECCV 2020 spotlight paper by Davis Rempe, Leonidas J. Guibas, Aaron Hertzmann, Bryan Russell, Ruben Villegas, and Jimei Yang. For more information, see the project webpage.


Environment Setup

Note: the code in this repo has only been tested on Ubuntu 16.04.

First create and activate a virtual environment to install dependencies for the code in this repo. For example with conda:

  • conda create -n contact_dynamics_env python=3.6
  • conda activate contact_dynamics_env
  • pip install -r requirements.txt

Note that the package versions in the requirements file are the exact versions the code was tested with, but they may need to be modified for your system. The code also uses ffmpeg.

This codebase requires a number of external dependencies that have their own installation instructions and environments; for example, you will likely want to create a separate environment just to run Monocular Total Capture below. The following external dependencies are only necessary to run the full pipeline (both contact detection and physical optimization). If you're only interested in detecting foot contacts, you only need to install OpenPose.

To get started, from the root of this repo run mkdir external.

Monocular Total Capture (MTC)

The full pipeline runs on the output from Monocular Total Capture (MTC). To run MTC, you must clone this fork, which contains a number of important modifications:

  • cd external
  • git clone https://github.com/davrempe/MonocularTotalCapture.git
  • Follow installation instructions in that repo to set up the MTC environment.

TOWR

The physics-based optimization takes advantage of the TOWR library. Specifically, this fork must be used:

  • cd external
  • git clone https://github.com/davrempe/towr.git
  • Follow the installation instructions to build and install the library using cmake.

Building Physics-Based Optimization

Important Note: if you did not use the HSL routines when building Ipopt as suggested, you will need to change the line solver->SetOption("linear_solver", "MA57"); to solver->SetOption("linear_solver", "mumps"); in towr_phys_optim/phys_optim.cpp before building our physics-based optimization. This falls back to the slower MUMPS solver and should be avoided if possible.

After building and installing TOWR, we must build the physics-based optimization part of the pipeline. To do this from the repo root:

cd towr_phys_optim
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make

Downloads

Synthetic Dataset

The synthetic dataset used to train our foot contact detection network contains motion sequences on various Mixamo characters. For each sequence, the dataset contains rendered videos from 2 different camera viewpoints, camera parameters, annotated foot contacts, detected 2D pose (with OpenPose), and the 3D motion as a bvh file. Note that this dataset is only needed if you want to retrain the contact detection network.
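As a quick sanity check after downloading, you can enumerate the sequence directories and inspect one set of contact labels. This is a minimal sketch only: the per-sequence layout and the contact-label file name used here are assumptions based on the description above, so adjust the paths to whatever the extracted dataset actually contains.

import os
import numpy as np

# Assumed layout: one directory per motion sequence under the dataset root.
data_root = "../data/synthetic_dataset"
sequences = sorted(d for d in os.listdir(data_root)
                   if os.path.isdir(os.path.join(data_root, d)))
print("Found %d sequence directories" % len(sequences))

# The file name below is hypothetical; use the contact-label file the dataset ships.
if sequences:
    contacts_path = os.path.join(data_root, sequences[0], "foot_contacts.npy")
    if os.path.exists(contacts_path):
        contacts = np.load(contacts_path)  # expected shape: (num_frames, 4)
        print("Contact labels for %s: %s" % (sequences[0], contacts.shape))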

To download the dataset:

  • cd data
  • bash download_full.sh to download the full (52 GB) dataset or bash download_sample.sh for a sample version (715 MB) with limited motions from 2 characters.

Pretrained Weights

To download pretrained weights for the foot contact detection network, run:

  • cd pretrained_weights
  • bash download.sh

Running the Pipeline on Real Videos

Next we'll walk through running each part of the pipeline on a set of real-world videos. A small example dataset with 2 videos is provided in data/example_data. Data should always be structured as shown in example_data: each video is placed in its own directory named after the video file to be processed, and inputs and outputs for each part of the pipeline will be saved in these directories. There is a helper script to create this structure from a directory of videos; a sketch of what it does is shown below.
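The sketch below illustrates the structure that helper creates, assuming a flat input directory of video files; the actual script name and options may differ, so treat this as illustrative only.

import os
import shutil

video_dir = "../data/my_videos"  # hypothetical directory containing e.g. dance1.mp4
for fname in os.listdir(video_dir):
    name, ext = os.path.splitext(fname)
    if ext.lower() not in [".mp4", ".avi", ".mov"]:
        continue
    # Each video gets its own directory named after the file (as in example_data).
    seq_dir = os.path.join(video_dir, name)
    os.makedirs(seq_dir, exist_ok=True)
    shutil.move(os.path.join(video_dir, fname), os.path.join(seq_dir, fname))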

The first two steps in the pipeline are running MTC/OpenPose on the video to get 3D/2D pose inputs, followed by foot contact detection using the 2D poses.

Running MTC

The first step is to run MTC and OpenPose. This will create the necessary data (2D and 3D poses) to run both foot contact detection and physical optimization.

The script scripts/run_totalcap.py is used to run MTC. It is invoked on a directory containing any number of videos, each in its own directory, and will run MTC on all contained videos. The script runs MTC, post-processes the results for use in the rest of the pipeline, and saves videos visualizing the final output. It also copies all the needed outputs (in particular tracked_results.json and the OpenPose detections in openpose_results) directly to the given data directory. To run MTC for the example data, first cd scripts then:

python run_totalcap.py --data ../data/example_data --out ../output/mtc_viz_out --totalcap ../external/MonocularTotalCapture

Alternatively, if you only want to do foot contact detection (and don't care about the physical optimization), you can instead run OpenPose by itself without MTC. There is a helper script to do this in scripts:

python run_openpose.py --data ../data/example_data --out ../data/example_data --openpose ../external/openpose --hands --face --save-video

This runs OpenPose and saves the outputs directly to the same data directory for later use in contact detection.
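For reference, OpenPose writes one JSON file per frame containing a people list of detections. A minimal sketch for reading the body keypoints from a single frame, assuming the default BODY_25 model and a hypothetical file name, looks like:

import json
import numpy as np

# File naming follows OpenPose's <prefix>_<frame>_keypoints.json convention;
# the exact prefix depends on the video being processed.
with open("../data/example_data/dance1/openpose_results/dance1_000000000000_keypoints.json") as f:
    frame = json.load(f)

if len(frame["people"]) > 0:
    # Each joint is stored as an (x, y, confidence) triple; BODY_25 has 25 joints.
    body = np.array(frame["people"][0]["pose_keypoints_2d"]).reshape(25, 3)
    print("Nose keypoint (x, y, conf):", body[0])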

Foot Contact Detection

The next step is using the learned neural network to detect foot contacts from the 2D pose sequence.

To run this, first download the pretrained network weights as detailed above. Then, to run on the example data, cd scripts and run:

python run_detect_contacts.py --data ../data/example_data --weights ../pretrained_weights/contact_detection_weights.pth

This will detect and save foot contacts for each video in the data directory to a file called foot_contacts.npy. This is simply an Fx4 array where F is the number of frames; for each frame there is a binary contact label for the left heel, left toe, right heel, and right toe, in that order.
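For example, a few lines of Python are enough to load and summarize the saved labels (the array layout follows the description above; dance1 is one of the example-data sequences):

import numpy as np

contacts = np.load("../data/example_data/dance1/foot_contacts.npy")  # shape (F, 4)
labels = ["left heel", "left toe", "right heel", "right toe"]
for i, name in enumerate(labels):
    # Binary labels, so the mean is the fraction of frames in contact.
    print("%s in contact for %.1f%% of frames" % (name, 100.0 * contacts[:, i].mean()))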

You may also optionally add the --viz flag to additionally save a video with overlaid detections (currently requires a lot of memory for videos more than a few seconds long).

Trajectory Optimization

Finally, we are able to run the kinematic optimization, retargeting, and physics-based optimization steps.

There is a single script to run all of these steps. Simply make sure you are in the scripts directory, then run:

python run_phys_mocap.py --data ../data/example_data --character ybot

This command will do the optimization directly on the YBot Mixamo character (ty and skeletonzombie are also available). To perform the optimization on the skeleton estimated from the video (i.e., to skip the retargeting step), give the argument --character combined.

Each of the steps in this pipeline can also be run individually if desired; see run_phys_mocap.py for how to do this.

Visualize Results with Blender

We can visualize results on a character using Blender. Before doing this, ensure Blender v2.79b is installed.

You will first need to download the Blender scene we use for rendering. From the repo root, cd data, then bash download_viz.sh will place viz_scene.blend in the data directory. Additionally, you need to download the character T-pose FBX file from the Mixamo website; in this example we are using the YBot character.

To visualize the result for a sequence, make sure you are in the src directory and use something like:

blender -b -P viz/viz_blender.py -- --results ../data/example_data/dance1 --fbx ../data/fbx/ybot.fbx --scene ../data/viz_scene.blend --character ybot --out ../output/rendered_res --fps 24 --draw-com --draw-forces

Note that there are many options to customize this rendering; please see the script for all of them. Also, the side view is set up heuristically, so you may need to manually tune setup_camera depending on your video.
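If the automatic side view looks wrong for your footage, one way to find a better placement is to experiment interactively in Blender's Python console before editing setup_camera. This is a generic bpy sketch with a hypothetical camera object name, not code taken from viz_blender.py:

import bpy

# Hypothetical object name; check the outliner in viz_scene.blend for the real one.
cam = bpy.data.objects["Camera"]
cam.location = (4.0, -4.0, 1.5)       # move the camera to the side of the subject
cam.rotation_euler = (1.4, 0.0, 0.8)  # orientation in radians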

Training and Testing Contact Detection Network on Synthetic Data

To re-train the contact detection network on the synthetic dataset and run inference on the test set, use the following:

>> cd src
# Train the contact detection network
>> python contact_learning/train.py --data ../data/synthetic_dataset --out ../output/contact_learning_results
# Run detection on the test set
>> python contact_learning/test.py --data ../data/synthetic_dataset --out ../output/contact_learning_results --weights-path ../output/contact_learning_results/op_only_weights_BEST.pth --full-video

Citation

If you found this code or paper useful, please consider citing:

@inproceedings{RempeContactDynamics2020,
    author={Rempe, Davis and Guibas, Leonidas J. and Hertzmann, Aaron and Russell, Bryan and Villegas, Ruben and Yang, Jimei},
    title={Contact and Human Dynamics from Monocular Video},
    booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
    year={2020}
}

Questions?

If you run into any problems or have questions, please create an issue or contact Davis ([email protected]).
