TEMOS: TExt to MOtionS

Generating diverse human motions from textual descriptions

Description

Official PyTorch implementation of the paper "TEMOS: Generating diverse human motions from textual descriptions".

Please visit our webpage for more details.

Bibtex

If you find this code useful in your research, please cite:

@article{petrovich22temos,
  title     = {{TEMOS}: Generating diverse human motions from textual descriptions},
  author    = {Petrovich, Mathis and Black, Michael J. and Varol, G{\"u}l},
  journal   = {arXiv},
  month     = {April},
  year      = {2022}
}

You can also give the repository a star if the code is useful to you.

Installation 👷

1. Create conda environment

conda create python=3.9 --name temos
conda activate temos

Install PyTorch 1.10 inside the conda environment, then install the following packages:
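
For reference, here is one way to install a PyTorch 1.10 build; the exact cudatoolkit pin is an assumption, so pick the variant matching your setup from the PyTorch installation instructions:

# CUDA 11.3 build; replace cudatoolkit=11.3 with cpuonly for a CPU-only install
conda install pytorch==1.10.0 torchvision==0.11.0 cudatoolkit=11.3 -c pytorch -c conda-forge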

pip install pytorch_lightning --upgrade
pip install torchmetrics==0.7
pip install hydra-core --upgrade
pip install hydra_colorlog --upgrade
pip install shortuuid
pip install tqdm
pip install pandas
pip install transformers
pip install psutil
pip install einops

The code was tested on Python 3.9.7 and PyTorch 1.10.0.

2. Download the datasets

KIT Motion-Language dataset

Be sure to read and follow their license agreements, and cite accordingly.

Use the code from Ghosh et al. or JL2P to download and prepare the KIT dataset (extraction of xyz joint coordinates from the axis-angle Master Motor Map data). Move or copy all files ending in "_meta.json", "_annotations.json" and "_fke.csv" into the datasets/kit folder.
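
A minimal sketch of that copy step, assuming the prepared files ended up in a hypothetical /path/to/prepared-kit directory:

# gather the three file types TEMOS expects into datasets/kit
mkdir -p datasets/kit
cp /path/to/prepared-kit/*_meta.json datasets/kit/
cp /path/to/prepared-kit/*_annotations.json datasets/kit/
cp /path/to/prepared-kit/*_fke.csv datasets/kit/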

AMASS dataset

WIP: instructions to be released soon

3. Download text model dependencies

Download distilbert from Hugging Face

cd deps/
git lfs install
git clone https://huggingface.co/distilbert-base-uncased
cd ..

4. SMPL body model

WIP: instructions to be released soon

5. (Optional) Download pre-trained models

WIP: instructions to be released soon

How to train TEMOS 🚀

The command to launch a training experiment is the following:

python train.py [OPTIONS]

Parsing is done with the powerful Hydra library. You can override anything in the configuration by passing arguments like foo=value or foo.bar=value.
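
For instance, to rename the experiment and fix the run identifier (both keys are part of the experiment path described below; the values here are only illustrative):

python train.py experiment=my-experiment run_id=myrun01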

Experiment path

Each training run will create a unique output directory (referred to as FOLDER below), where logs, configurations and checkpoints are stored.

By default it is defined as outputs/${data.dataname}/${experiment}/${run_id}, with data.dataname the name of the dataset (see examples below), experiment=baseline, and run_id a unique random 8-character alphanumeric identifier for the run (everything can be overridden if needed).

This folder is printed during logging; it should look like outputs/kit-mmm-xyz/baseline/3gn7h7v6/.

Some optional parameters

Datasets

  • data=kit-mmm-xyz: KIT-ML motions processed by the MMM framework (as in the original data), loaded as xyz joint coordinates after the axis-angle → xyz transformation (by default)
  • data=kit-amass-rot: KIT-ML motions loaded as SMPL rotations and translations, from AMASS (processed with MoSh++)
  • data=kit-amass-xyz: KIT-ML motions loaded as xyz joint coordinates, from AMASS (processed with MoSh++), after passing through a SMPL layer and regressing the correct joints

Training

  • trainer=gpu: training with CUDA, on an automatically selected GPU (default)
  • trainer=cpu: training on the CPU
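
Combining the options above, training on the SMPL rotations data on the CPU looks like:

python train.py data=kit-amass-rot trainer=cpu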

How to generate motions with TEMOS

Dataset splits

To get results comparable to previous work, we use the same splits as in Language2Pose and Ghosh et al.. To be explicit, and not rely on random seeds, you can find the list of id-files in datasets/kit-splits/ (train/val/test).

When sampling Ghosh et al.'s motions with their code, I noticed that their dataloader is missing some sequences (see the discussion here). In order to compare all the methods on the same test set, we use the 520 sequences produced by Ghosh et al.'s code for the test set (instead of the 587 sequences). This split is referred to as gtest (for "Ghosh test"). It is used by default in the sampling/evaluation/rendering code. You can change this set by specifying split=SPLIT in each command line.
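
For example, to run the sampling command described below on the full KIT test split instead of gtest:

python sample.py folder=FOLDER split=test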

You can also find in datasets/kit-splits/ the split used for the human study (human-study) and the split used for the visuals of the paper (visu).

Sampling/generating motions

The command line to sample one motion per sequence is the following:

python sample.py folder=FOLDER [OPTIONS]

This command will create the folder FOLDER/samples/SPLIT and save the motions in the npy format.

Some optional parameters

  • mean=true: Take the mean value of the latent distribution instead of sampling (default is mean=false)
  • number_of_samples=X: Generate X motions (by default it generates only one)
  • fact=X: Multiply sigma by X during sampling (1.0 by default; diversity increases when fact>1)
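
As an illustration, here is how to generate 5 motions per test sequence with a slightly increased sigma (the folder value is the example experiment path from above):

python sample.py folder=outputs/kit-mmm-xyz/baseline/3gn7h7v6 number_of_samples=5 fact=1.2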

Model trained on SMPL rotations

If your model has been trained with data=kit-amass-rot, it produces SMPL rotations and translations. In this case, you can specify the type of data you want to save after passing through the SMPL layer.

  • jointstype=mmm: Generate xyz joints compatible with the MMM bodies (by default). This gives skeletons comparable to data=kit-mmm-xyz (needed for evaluation).
  • jointstype=vertices: Generate human body meshes (needed for rendering).
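
For example, to save body meshes for rendering from such a model:

python sample.py folder=FOLDER jointstype=vertices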

Evaluating TEMOS (and prior works)

To evaluate TEMOS on the metrics defined in the paper, you must generate motions first (see above), and then run:

python evaluate.py folder=FOLDER [OPTIONS]

This will compute and store the metrics in the file FOLDER/samples/metrics_SPLIT in YAML format.

Some optional parameters

It takes the same parameters as sample.py and will choose the right directories for you. When evaluating with number_of_samples>1, the script computes two sets of metrics: metrics_gtest_multi_avg (the average of the single-sample metrics) and metrics_gtest_multi_best (choosing the best output for each motion). Please check the paper for more details.
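
For instance, evaluating the multi-sample generation from the sampling example above (assuming sample.py has already been run with the same options):

python evaluate.py folder=outputs/kit-mmm-xyz/baseline/3gn7h7v6 number_of_samples=5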

Model trained on SMPL rotations

Currently, evaluation is only implemented for skeletons in the MMM format. You must therefore use jointstype=mmm during sampling.

Evaluating prior works

WIP: the proper instructions and code will be available soon.

To give an overview:

  1. Generate motions with their code (they are still in the rifke feature space)
  2. Save them in xyz format (I "hack" their render script to save xyz npy files instead of rendering)
  3. Load them into the evaluation code (instead of loading TEMOS motions)

Rendering motions

To get the visuals of the paper, I use Blender 2.93. The setup (installation + running) is not trivial; I do my best to explain the process below, but don't hesitate to reach out if you run into a problem.

Installation

The goal is to install Blender so that it can be used with python scripts (so we can use "import bpy"). There seem to be many different ways to do this; I will explain the one I use and understand (feel free to use another method or suggest an easier one). Blender will be installed as a standalone package. To use my scripts, we will run Blender in the background, and the python executable bundled with Blender will run the script.

In any case, after the installation, please do step 5 to install the dependencies in the python environment of Blender.

  1. Please follow the instructions to install blender 2.93 on your operating system. Please install exactly this version.
  2. Locate the blender executable if it is not in your path. For the following commands, please replace blender with the path to your executable (or create a symbolic link or use an alias).
    • On Linux, it could be in /usr/bin/blender (already in your path).
    • On macOS, it could be in /Applications/Blender.app/Contents/MacOS/Blender (not in your path)
  3. Check that the correct version is installed:
    • blender --background --version should return "Blender 2.93.X".
    • blender --background --python-expr "import sys; print('\nThe version of python is '+sys.version.split(' ')[0])" should return "3.9.X".
  4. Locate the python installation used by Blender with the following line. I will refer to this path as path/to/blender/python.
blender --background --python-expr "import sys; import os; print('\nThe path to the installation of python of blender can be:'); print('\n'.join(['- '+x for x in sys.path if 'python' in (file:=os.path.split(x)[-1]) and not file.endswith('.zip')]))"
  5. Install these packages in the python environment of Blender:
path/to/blender/python -m pip install --user numpy
path/to/blender/python -m pip install --user matplotlib
path/to/blender/python -m pip install --user hydra-core --upgrade
path/to/blender/python -m pip install --user hydra_colorlog --upgrade
path/to/blender/python -m pip install --user moviepy
path/to/blender/python -m pip install --user shortuuid

Launch a python script (with arguments) with blender

Now that blender is installed, if we want to run the script script.py with the blender API (the bpy module), we can use:

blender --background --python script.py

If you need to add additional arguments, this will probably fail (as blender will interpret the arguments). Please use the double dash -- to tell blender to ignore the rest of the command. I then only parse the last part of the command (check temos/launch/blender.py if you are interested).

Rendering one sample

To render only one motion, please use this command line:

blender --background --python render.py -- npy=PATH_TO_DATA.npy [OPTIONS]

Rendering all the data

Please use this command line to render all the data of a split (which must already have been generated with sample.py). I suggest using split=visu to render only a small subset.

blender --background --python render.py -- folder=FOLDER [OPTIONS]

SMPL bodies

Don't forget to generate the data with the option jointstype=vertices beforehand. The renderer automatically detects whether the motion is a sequence of joints or of meshes.

Some optional parameters

  • downsample=true: Render only 1 frame out of every 8, to speed up rendering (by default)
  • canonicalize=true: Make sure the first pose is oriented canonically (by translating and rotating the entire sequence) (by default)
  • mode=XXX: Choose the rendering mode (default is mode=sequence)
    • video: Render all the frames and generate a video (as in the supplementary video)
    • sequence: Render a single image with num=8 bodies (sampled uniformly in time, as in the figures of the paper)
    • frame: Render a single frame at a specific point in time (exact_frame=0.5 generates the frame at about 50% of the video)
  • quality=true: Render at a higher resolution and denoise the output (defaults to false to speed up rendering)
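
Putting the options above together, rendering the visu split of an experiment as high-quality videos could look like:

blender --background --python render.py -- folder=FOLDER split=visu mode=video quality=true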

License

This code is distributed under an MIT LICENSE.

Note that our code depends on other libraries, including SMPL, SMPL-X, PyTorch3D, Hugging Face, Hydra, and uses datasets which each have their own respective licenses that must also be followed.
