Populating 3D Scenes by Learning Human-Scene Interaction https://posa.is.tue.mpg.de/


[Project Page] [Paper]

POSA Examples

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use the POSA data, model and software (the "Data & Software"), including 3D meshes, images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding GitHub repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Description

This repository contains the training, random sampling, and scene population code used for the experiments in POSA.

Installation

To install the necessary dependencies, run the following command:

    pip install -r requirements.txt

The code has been tested with Python 3.7, CUDA 10.0, cuDNN 7.5, and PyTorch 1.7 on Ubuntu 20.04.
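
A quick way to check that your environment is close to this tested configuration (a minimal sketch; it only prints versions and does not guarantee compatibility):

    import torch

    # Compare against the tested configuration: PyTorch 1.7 with CUDA 10.0.
    print('torch version:', torch.__version__)
    print('CUDA available:', torch.cuda.is_available())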

Dependencies

POSA_dir

To use the code, you need to get POSA_dir.zip. After unzipping it, you should have a directory with the following structure:

POSA_dir
├── cam2world
├── data
├── mesh_ds
├── scenes
├── sdf
└── trained_models

The content of each folder is explained below (a quick structure check is sketched after the list):

  • trained_models contains two trained models: one trained on contact only, and one trained on both contact and semantics.
  • data contains the train and test data extracted from the PROX Dataset and PROX-E Dataset.
  • scenes contains the 12 scenes from the PROX Dataset.
  • sdf contains the signed distance fields for the scenes in the previous folder.
  • mesh_ds contains mesh downsampling and upsampling files similar to those in COMA.
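
As a quick sanity check that the archive was extracted correctly, you can verify that the expected folders exist (a minimal sketch; it assumes the POSA_dir environment variable from the step below is set, otherwise edit the fallback path):

    import os

    # The fallback path is a placeholder; set POSA_dir as described below.
    posa_dir = os.environ.get('POSA_dir', '/path/to/POSA_dir')
    expected = ['cam2world', 'data', 'mesh_ds', 'scenes', 'sdf', 'trained_models']

    missing = [d for d in expected if not os.path.isdir(os.path.join(posa_dir, d))]
    print('missing folders:', missing if missing else 'none')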

SMPL-X

You need to get the SMPL-X body model. Please extract the folder, rename it to smplx_models, and place it in the POSA_dir described above.

AGORA

In addition, you need to get the POSA_rp_poses.zip file from the AGORA Dataset and extract it in POSA_dir. This file contains a number of test poses to be used in the next steps. Note that you don't need the whole AGORA dataset.
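
Each test pose is stored as a .pkl file. If you want to inspect one before running the scripts below, a minimal sketch (the exact contents are defined by the AGORA export, so printing the top-level structure is a safe first step):

    import pickle

    # Path assumes POSA_rp_poses was extracted into POSA_dir as described above.
    pkl_path = '/path/to/POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl'
    with open(pkl_path, 'rb') as f:
        data = pickle.load(f)

    # A dict of SMPL-X body parameters is expected; print what is actually there.
    print(list(data.keys()) if isinstance(data, dict) else type(data))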

Finally, run the following command or add it to your ~/.bashrc:

export POSA_dir=/path/to/your/POSA_dir

Inference

You can test POSA using the provided trained models. Below we provide examples of how to generate POSA features and how to populate a 3D scene.

Random Sampling

To generate random features from a trained model, run the following command:

python src/gen_rand_samples.py --config cfg_files/contact.yaml --checkpoint_path $POSA_dir/trained_models/contact.pt --pkl_file_path $POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl --render 1 --viz 1 --num_rand_samples 3 

Or

python src/gen_rand_samples.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl --render 1 --viz 1 --num_rand_samples 3 

This will open a window showing the generated features for the specified .pkl file. It also renders the features to the folder random_samples in POSA_dir.

The number of generated feature maps can be controlled by the flag --num_rand_samples.

If you don't have a screen, you can turn off the visualization with --viz 0.

If you don't have CUDA installed, you can add the flag --use_cuda 0. This applies to all commands in this repository.

You can also run the same command on the whole folder of test poses:

python src/gen_rand_samples.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses --render 1 --viz 1 --num_rand_samples 3 
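
If you prefer to drive the script from Python, for example to vary the flags per pose, a minimal sketch using subprocess (the flags are exactly those documented above; the loop itself is not part of the repository):

    import glob
    import os
    import subprocess

    posa_dir = os.environ['POSA_dir']
    for pkl in sorted(glob.glob(os.path.join(posa_dir, 'POSA_rp_poses', '*.pkl'))):
        subprocess.run([
            'python', 'src/gen_rand_samples.py',
            '--config', 'cfg_files/contact_semantics.yaml',
            '--checkpoint_path', os.path.join(posa_dir, 'trained_models', 'contact_semantics.pt'),
            '--pkl_file_path', pkl,
            '--render', '1',
            '--viz', '0',  # headless: skip the interactive viewer
            '--num_rand_samples', '3',
        ], check=True)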

Scene Population

Given a body mesh from the AGORA Dataset, POSA automatically places the body mesh in a 3D scene:

python src/affordance.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl --scene_name MPH16 --render 1 --viz 1 

This will open a window showing the placed body in the scene. It also renders the placements to the folder affordance in POSA_dir.

You can control the number of placements for the same body mesh in a scene using the flag --num_rendered_samples; the default value is 1.

The generated feature maps can be shown by adding --show_gen_sample 1.

You can also run the same script on the whole folder of test poses:

python src/affordance.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses --scene_name MPH16 --render 1 --viz 1 

To place clothed body meshes, you first need to buy the Renderpeople assets or get the free models. Create a folder rp_clothed_meshes in POSA_dir and place all the clothed body .obj meshes in it. Then run this command:

python src/affordance.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl --scene_name MPH16 --render 1 --viz 1 --use_clothed_mesh 1
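
To try the same pose in every scene, you could loop over the meshes in the scenes folder. A minimal sketch, assuming (as in PROX) one mesh file per scene, named after the scene (e.g. MPH16):

    import glob
    import os
    import subprocess

    posa_dir = os.environ['POSA_dir']
    # Assumption: each file in POSA_dir/scenes is one scene mesh named after its scene.
    scene_names = sorted({os.path.splitext(os.path.basename(p))[0]
                          for p in glob.glob(os.path.join(posa_dir, 'scenes', '*'))})

    for scene in scene_names:
        subprocess.run([
            'python', 'src/affordance.py',
            '--config', 'cfg_files/contact_semantics.yaml',
            '--checkpoint_path', os.path.join(posa_dir, 'trained_models', 'contact_semantics.pt'),
            '--pkl_file_path', os.path.join(posa_dir, 'POSA_rp_poses', 'rp_aaron_posed_001_0_0.pkl'),
            '--scene_name', scene,
            '--render', '1',
            '--viz', '0',  # headless
        ], check=True)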

Testing on Your Own Poses

POSA has been tested on the AGORA dataset only. Nonetheless, you can try POSA with any SMPL-X poses you have. You just need a .pkl file with the SMPL-X body parameters and the gender. Your SMPL-X vertices must be brought to a canonical form similar to the POSA training data: the vertices should be centered at the pelvis joint, with the x axis pointing to the left, the y axis pointing backward, and the z axis pointing upwards, as shown in the figure below. The x, y, z axes are denoted by the red, green, and blue colors respectively.

canonical_form

See the function pkl_to_canonical in data_utils.py for an example of how to do this transformation.
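
For illustration, the centering step alone could look like the sketch below; pkl_to_canonical additionally removes the global orientation so that the axes line up as described above. The function name and shapes here are illustrative, not part of the repository:

    import numpy as np

    def center_at_pelvis(vertices, pelvis):
        """Translate SMPL-X vertices so the pelvis joint sits at the origin.

        vertices: (N, 3) array of vertex positions
        pelvis:   (3,) position of the pelvis joint (joint 0 in SMPL-X)
        """
        # Subtract the pelvis position from every vertex (broadcast over N).
        return vertices - pelvis[None, :]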

Training

To retrain POSA from scratch, run the following command:

python src/train_posa.py --config cfg_files/contact_semantics.yaml

Visualize Ground Truth Data

You can also visualize the training data:

python src/show_gt.py --config cfg_files/contact_semantics.yaml --train_data 1

Or the test data:

python src/show_gt.py --config cfg_files/contact_semantics.yaml --train_data 0

Note that the ground truth data has been downsampled to speed up training, as explained in the paper. See the training details in the appendices.

Citation

If you find this Model & Software useful in your research, we would kindly ask you to cite:

@inproceedings{Hassan:CVPR:2021,
    title = {Populating {3D} Scenes by Learning Human-Scene Interaction},
    author = {Hassan, Mohamed and Ghosh, Partha and Tesch, Joachim and Tzionas, Dimitrios and Black, Michael J.},
    booktitle = {Proceedings {IEEE/CVF} Conf.~on Computer Vision and Pattern Recognition ({CVPR})},
    month = jun,
    month_numeric = {6},
    year = {2021}
}

If you use the extracted training data, scenes, or SDFs, please cite:

@inproceedings{PROX:2019,
  title = {Resolving {3D} Human Pose Ambiguities with {3D} Scene Constraints},
  author = {Hassan, Mohamed and Choutas, Vasileios and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = {International Conference on Computer Vision},
  month = oct,
  year = {2019},
  url = {https://prox.is.tue.mpg.de},
  month_numeric = {10}
}
@inproceedings{PSI:2019,
  title = {Generating 3D People in Scenes without People},
  author = {Zhang, Yan and Hassan, Mohamed and Neumann, Heiko and Black, Michael J. and Tang, Siyu},
  booktitle = {Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2020},
  url = {https://arxiv.org/abs/1912.02923},
  month_numeric = {6}
}

If you use the AGORA test poses, please cite:

@inproceedings{Patel:CVPR:2021,
  title = {{AGORA}: Avatars in Geography Optimized for Regression Analysis},
  author = {Patel, Priyanka and Huang, Chun-Hao P. and Tesch, Joachim and Hoffmann, David T. and Tripathi, Shashank and Black, Michael J.},
  booktitle = {Proceedings IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2021},
  month_numeric = {6}
}

Contact

For commercial licensing (and all related questions for business applications), please contact [email protected].
