We are More than Our Joints: Predicting How 3D Bodies Move

Overview


Citation

This repo contains the official implementation of our paper MOJO:

@inproceedings{Zhang:CVPR:2021,
  title = {We are More than Our Joints: Predicting how {3D} Bodies Move},
  author = {Zhang, Yan and Black, Michael J. and Tang, Siyu},
  booktitle = {Proceedings IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2021},
  month_numeric = {6}
}

License

The MOJO code is released under the CC BY-NC-SA 4.0 license, which covers:

models/fittingop.py
experiments/utils/batch_gen_amass.py
experiments/utils/utils_canonicalize_amass.py
experiments/utils/utils_fitting_jts2mesh.py
experiments/utils/vislib.py
experiments/vis_*_amass.py

The rest of the code is developed based on DLow and, according to its license, follows the original CMU license.

Environment & code structure

  • Tested OS: Linux Ubuntu 18.04
  • Packages:
  • Note: all scripts should be run from the root of this repo to avoid path issues. Also, please update the path configurations in the code to match your local setup; otherwise errors will occur.

Training

The training is split into two steps. Given the config file experiments/cfg/amass_mojo_f9_nsamp50.yml, we can run

  • python experiments/train_MOJO_vae.py --cfg amass_mojo_f9_nsamp50 to train the MOJO VAE
  • python experiments/train_MOJO_dlow.py --cfg amass_mojo_f9_nsamp50 to train DLow

Evaluation

The experiments/eval_*.py files are for evaluation. The eval_*_pred.py scripts can either evaluate the results while predicting, or save the results to a file for further evaluation and visualization. For example, python experiments/eval_kps_pred.py --cfg amass_mojo_f9_nsamp50 --mode vis saves files to the folder results/amass_mojo_f9_nsamp50.
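
For intuition, evaluation in this line of work typically reports sample diversity (APD) and best-of-N accuracy (ADE/FDE) over multiple predicted futures, following DLow. The sketch below illustrates these standard metrics with NumPy; it is a simplified illustration, not the exact code in eval_*_pred.py, and the array shapes are assumptions.

import numpy as np

def diversity_and_accuracy(pred, gt):
    # pred: (S, T, D) -- S predicted futures of length T with D feature dims
    # gt:   (T, D)    -- the ground-truth future
    S = pred.shape[0]
    flat = pred.reshape(S, -1)
    # APD: average pairwise L2 distance between distinct samples
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    apd = dists.sum() / (S * (S - 1))
    # ADE / FDE: best-of-S average / final displacement error w.r.t. the ground truth
    per_frame = np.linalg.norm(pred - gt[None], axis=-1)   # (S, T)
    ade = per_frame.mean(axis=1).min()
    fde = per_frame[:, -1].min()
    return apd, ade, fde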

Generation

In MOJO, the recursive projection scheme recovers 3D bodies from the predicted markers and keeps the body valid at every prediction step. The relevant implementation is mainly in models/fittingop.py and experiments/test_recursive_proj.py. An example run is

python experiments/test_recursive_proj.py --cfg amass_mojo_f9_nsamp50 --testdata ACCAD --gpu_index 0
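
Conceptually, the scheme alternates between predicting the next markers and fitting a SMPL-X body to them, then reads the markers back off the fitted mesh so that the next prediction step starts from a valid body. Below is a minimal sketch of this loop; predict_next, fit, forward_smplx, and marker_ids are hypothetical names for illustration, not the actual interface of models/fittingop.py.

import torch

def recursive_projection(model, fitter, markers_seq, n_future, marker_ids):
    # markers_seq: (T, M, 3) observed marker history
    history = markers_seq.clone()
    body_params_seq = []
    for _ in range(n_future):
        pred_markers = model.predict_next(history)      # (M, 3), hypothetical predictor call
        body_params = fitter.fit(pred_markers)          # optimize SMPL-X parameters to the markers
        vertices = fitter.forward_smplx(body_params)    # (V, 3) fitted body mesh, hypothetical helper
        valid_markers = vertices[marker_ids]            # project back onto the marker set
        history = torch.cat([history, valid_markers[None]], dim=0)
        body_params_seq.append(body_params)
    return body_params_seq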

Datasets

In MOJO, we have used AMASS, Human3.6M, and HumanEva.

For Human3.6M and HumanEva, we follow the same pre-processing step as in DLow, VideoPose3D, and others. Please refer to their pages, e.g. this one, for details.

For AMASS, we canonicalize the motion sequences with our own procedure; the details are in experiments/utils/utils_canonicalize_amass.py. We find this sequence canonicalization to be important. The canonicalized AMASS data used in our work can be downloaded here; it includes the names of the randomly sampled ACCAD and BMLhandball sequences used in our experiments on motion realism.
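
As a rough picture of what canonicalization does (a simplified sketch under assumed conventions, not the actual procedure in utils_canonicalize_amass.py): each sequence is expressed in a body-centered frame of its first pose, i.e. the initial pelvis is moved to the origin and the whole sequence is rotated about the vertical axis so the first frame faces a fixed direction. The joint indices below are placeholders.

import numpy as np

def canonicalize_sequence(joints, pelvis=0, l_hip=1, r_hip=2):
    # joints: (T, J, 3) joint (or marker) positions of one sequence, z-up
    joints = joints - joints[0, pelvis]              # put the first-frame pelvis at the origin
    hip_dir = joints[0, l_hip] - joints[0, r_hip]    # left-to-right hip direction in frame 0
    yaw = np.arctan2(hip_dir[1], hip_dir[0])         # facing angle in the horizontal plane
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot_z = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    return joints @ rot_z.T                          # same rotation applied to every frame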

Models

For human body modeling, we employ the SMPL-X parametric body model; you need to follow its license to download it. Based on SMPL-X, we can represent the body either by its joints or by a sparse set of body mesh vertices (the body markers); a minimal sketch of reading markers and joints off the model follows the list below.

  • CMU: 41 markers; the corresponding SMPL-X mesh vertex IDs can be downloaded here.
  • SSM2: 64 markers; the corresponding SMPL-X mesh vertex IDs can be downloaded here.
  • Joints: we use 22 joints. There is nothing to download; they are obtained directly from the SMPL-X body model. See the code for details.
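
The sketch below shows what this looks like with the official smplx Python package; the marker-ID file name and model path are placeholders (the actual ID files are the CMU/SSM2 downloads above).

import json
import torch
import smplx

# Placeholders: point these to your SMPL-X model folder and a downloaded marker-ID file.
body_model = smplx.create('path/to/smplx_models', model_type='smplx', gender='neutral')
with open('ssm2_marker_vertex_ids.json') as f:
    marker_ids = torch.tensor(json.load(f))      # indices into the SMPL-X vertex array

output = body_model(return_verts=True)           # default (zero) pose and shape
markers = output.vertices[0, marker_ids]         # (64, 3) marker locations on the mesh
joints = output.joints[0, :22]                   # the 22 body joints used in the paper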

Our CVAE model configurations are in experiments/cfg. The pre-trained checkpoints can be downloaded here.

Related projects

  • AMASS: It unifies diverse motion capture data with the SMPL-H model and provides a large-scale, high-quality dataset. Its official codebase and tutorials are in this GitHub repo.

  • GRAB: Most mocap data contains only the body motion. GRAB, however, provides high-quality data of human-object interactions: besides the body motion, the object motion and the hand-object contact are captured simultaneously. More demonstrations are in its GitHub repo.

Acknowledgement & disclaimer

We thank Nima Ghorbani for the advice on the body marker setting and the AMASS dataset. We thank Yinghao Huang, Cornelia Köhler, Victoria Fernández Abrevaya, and Qianli Ma for proofreading. We thank Xinchen Yan and Ye Yuan for discussions on baseline methods. We thank Shaofei Wang and Siwei Zhang for their help with the user study and the presentation, respectively.

MJB has received research gift funds from Adobe, Intel, Nvidia, Facebook, and Amazon. While MJB is a part-time employee of Amazon, his research was performed solely at, and funded solely by, Max Planck. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH.
