CPF: Learning a Contact Potential Field to Model the Hand-object Interaction

Overview

Contact Potential Field

This repo contains the model, demo, and test code of our paper: CPF: Learning a Contact Potential Field to Model the Hand-object Interaction.

Guide to the Demo

1. Get our code:

$ git clone --recursive https://github.com/lixiny/CPF.git
$ cd CPF

2. Set up your new environment:

$ conda env create -f environment.yaml
$ conda activate cpf

3. Download the asset files and put them in the assets folder.

Download the MANO model files from the official MANO website and put them into assets/mano. We currently only use MANO_RIGHT.pkl.

Now your assets folder should look like this:

.
├── anchor/
│   ├── anchor_mapping_path.pkl
│   ├── anchor_weight.txt
│   ├── face_vertex_idx.txt
│   └── merged_vertex_assignment.txt
├── closed_hand/
│   └── hand_mesh_close.obj
├── fhbhands_fits/
│   ├── Subject_1/
│   │   ├── ...
│   ├── Subject_2/
│   ├── ...
├── hand_palm_full.txt
└── mano/
    ├── fhb_skel_centeridx9.pkl
    ├── info.txt
    ├── LICENSE.txt
    └── MANO_RIGHT.pkl
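
If you want to double-check this layout before running anything, a minimal sanity check could look like the sketch below (not part of the repo; the paths are taken from the tree above):

# Hypothetical sanity check: confirm the key asset files are in place.
import os

required = [
    "assets/mano/MANO_RIGHT.pkl",
    "assets/closed_hand/hand_mesh_close.obj",
    "assets/hand_palm_full.txt",
    "assets/anchor/anchor_mapping_path.pkl",
]

missing = [p for p in required if not os.path.exists(p)]
if missing:
    raise FileNotFoundError(f"Missing asset files: {missing}")
print("All required asset files found.")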

4. Download Dataset

First-Person Hand Action Benchmark (fhb)

Download and unzip the First-Person Hand Action Benchmark dataset into the data/fhbhands folder, following the official instructions. If everything is correct, your data/fhbhands should look like this:

.
├── action_object_info.txt
├── action_sequences_normalized/
├── change_log.txt
├── data_split_action_recognition.txt
├── file_system.jpg
├── Hand_pose_annotation_v1/
├── Object_6D_pose_annotation_v1_1/
├── Object_models/
├── Subjects_info/
├── Video_files/
├── Video_files_480/   # optional

Optionally, resize the images (this speeds up training!) using handobjectconsist/reduce_fphab.py:

$ python reduce_fphab.py

Download our fhbhands_supp and place it at data/fhbhands_supp.

Download our fhbhands_example and place it at data/fhbhands_example. fhbhands_example contains 10 samples designed to demonstrate our pipeline; a quick sanity check is sketched after the folder tree below.

data/
├── fhbhands/
├── fhbhands_supp/
│   ├── Object_models/
│   └── Object_models_binvox/
├── fhbhands_example/
│   ├── annotations/
│   ├── images/
│   ├── object_models/
│   └── sample_list.txt
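
As a quick check (hypothetical snippet; the file path comes from the tree above, and one sample per line is assumed), the example split should list the 10 demo samples:

# Hypothetical check: sample_list.txt should enumerate the 10 demo samples
# (assuming one sample identifier per line).
with open("data/fhbhands_example/sample_list.txt") as f:
    samples = [line.strip() for line in f if line.strip()]
print(f"{len(samples)} example samples")  # expected: 10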

HO3D

Download and unzip the HO3D dataset into the data/HO3D folder, following the official instructions. If everything is correct, the HO3D and YCB folders in your data directory should look like this:

data/
├── HO3D/
│   ├── evaluation/
│   ├── evaluation.txt
│   ├── train/
│   └── train.txt
├── YCB_models/
│   ├── 002_master_chef_can/
│   ├── ...

Download our YCB_models_supp and place it at data/YCB_models_supp.

Now the data folder should have a root structure like:

data/
├── fhbhands/
├── fhbhands_supp/
├── fhbhands_example/
├── HO3D/
├── YCB_models/
├── YCB_models_supp/
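
Before dumping or optimizing, you may want to verify this layout. The following sketch (not part of the repo) only checks that the directories listed above exist:

# Hypothetical layout check for the data/ root.
import os

expected_dirs = [
    "data/fhbhands",
    "data/fhbhands_supp",
    "data/fhbhands_example",
    "data/HO3D",
    "data/YCB_models",
    "data/YCB_models_supp",
]

for d in expected_dirs:
    print(f"{d}: {'ok' if os.path.isdir(d) else 'MISSING'}")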

5. Download pre-trained checkpoints

Download our pre-trained CPF_checkpoints and unzip them into the CPF_checkpoints folder:

CPF_checkpoints/
├── honet/
│   ├── fhb/
│   ├── ho3dofficial/
│   └── ho3dv1/
├── picr/
│   ├── fhb/
│   ├── ho3dofficial/
│   └── ho3dv1/
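
Assuming the .pth.tar files are standard torch.save archives (which the naming suggests, but is not documented here), you can peek inside one like this:

# Minimal sketch: inspect a downloaded checkpoint with plain PyTorch.
# The exact keys stored in the file are not documented, so we only print them.
import torch

ckpt = torch.load("CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar",
                  map_location="cpu")
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))
else:
    print("checkpoint object:", type(ckpt))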

6. Launch visualization

We create an FHBExample dataset in hocontact/hodatasets/fhb_example.py that contains only 10 samples to demonstrate our pipeline. Note: this demo requires an active screen for visualization. Press q in the "runtime hand" window to start fitting.

$ python training/run_demo.py \
    --gpu 0 \
    --init_ckpt CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar \
    --honet_mano_fhb_hand

7. Test on the full datasets (FHB, HO3Dv1, HO3Dofficial)

We provide shell scripts to test on the full datasets and approximately reproduce our results.

FHB

Dump the results of HoNet and PiCR:

$ python training/dumppicr_dist.py \
    --gpu 0,1 \
    --dist_master_addr localhost \
    --dist_master_port 12355 \
    --exp_keyword fhb \
    --train_datasets fhb \
    --train_splits train \
    --val_dataset fhb \
    --val_split test \
    --split_mode actions \
    --batch_size 8 \
    --dump_eval \
    --dump \
    --vertex_contact_thresh 0.8 \
    --filter_thresh 5.0 \
    --dump_prefix common/picr \
    --init_ckpt CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar

Then reload the dumped results and run the GeO optimizer:

# setting 1: hand-only
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python training/optimize.py \
    --n_workers 16 \
    --data_path common/picr/fhbhands/test_actions_mf1.0_rf0.25_fct5.0_ec \
    --mode hand

# setting 2: hand-obj
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python training/optimize.py \
    --n_workers 16 \
    --data_path common/picr/fhbhands/test_actions_mf1.0_rf0.25_fct5.0_ec \
    --mode hand_obj \
    --compensate_tsl

HO3Dv1

Dump:

$ python training/dumppicr_dist.py  \
    --gpu 0,1 \
    --dist_master_addr localhost \
    --dist_master_port 12356 \
    --exp_keyword ho3dv1 \
    --train_datasets ho3d \
    --train_splits train \
    --val_dataset ho3d \
    --val_split test \
    --split_mode objects \
    --batch_size 4 \
    --dump_eval \
    --dump \
    --vertex_contact_thresh 0.8 \
    --filter_thresh 5.0 \
    --dump_prefix common/picr_ho3dv1 \
    --init_ckpt CPF_checkpoints/picr/ho3dv1/checkpoint_300.pth.tar

Then reload the dumped results and run the GeO optimizer:

# hand-only
$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python training/optimize.py \
    --n_workers 24 \
    --data_path common/picr_ho3dv1/HO3D/test_objects_mf1_likev1_fct5.0_ec/ \
    --lr 1e-2 \
    --n_iter 500 \
    --hodata_no_use_cache \
    --lambda_contact_loss 10.0 \
    --lambda_repulsion_loss 4.0 \
    --repulsion_query 0.030 \
    --repulsion_threshold 0.080 \
    --mode hand

# hand-obj
$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python training/optimize.py \
    --n_workers 24 \
    --data_path common/picr_ho3dv1/HO3D/test_objects_mf1_likev1_fct5.0_ec/ \
    --lr 1e-2 \
    --n_iter 500  \
    --hodata_no_use_cache \
    --lambda_contact_loss 10.0 \
    --lambda_repulsion_loss 6.0 \
    --repulsion_query 0.030 \
    --repulsion_threshold 0.080 \
    --mode hand_obj
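
For intuition about the --lambda_contact_loss and --lambda_repulsion_loss weights above: fitting trades off an attraction term that pulls hand vertices toward their predicted contacts against a short-range term that penalizes interpenetration. The snippet below is a deliberately simplified, generic illustration of such a weighted objective; it is not the repository's GeO implementation, and all tensor names are made up.

# Toy illustration only (NOT the repo's GeO optimizer): a weighted sum of a
# contact-attraction term and a short-range repulsion term.
import torch

def toy_fit_energy(hand_verts, contact_targets, obj_points,
                   lambda_contact=10.0, lambda_repulsion=4.0,
                   repulsion_threshold=0.080):
    # Pull selected hand vertices toward their target contact points.
    contact_term = ((hand_verts - contact_targets) ** 2).sum(dim=1).mean()
    # Penalize hand vertices closer than the threshold to any object point.
    closest = torch.cdist(hand_verts, obj_points).min(dim=1).values
    repulsion_term = torch.relu(repulsion_threshold - closest).mean()
    return lambda_contact * contact_term + lambda_repulsion * repulsion_term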

HO3Dofficial

Dump:

$ python training/dumppicr_dist.py  \
    --gpu 0,1 \
    --dist_master_addr localhost \
    --dist_master_port 12356 \
    --exp_keyword ho3dofficial \
    --train_datasets ho3d \
    --train_splits val \
    --val_dataset ho3d \
    --val_split test \
    --split_mode official \
    --batch_size 4 \
    --dump_eval \
    --dump \
    --test_dump \
    --vertex_contact_thresh 0.8 \
    --filter_thresh 5.0 \
    --dump_prefix common/picr_ho3dofficial \
    --init_ckpt CPF_checkpoints/picr/ho3dofficial/checkpoint_300.pth.tar

Then reload the dumped results and run the GeO optimizer:

$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python training/optimize.py \
    --n_workers 24 \
    --data_path common/picr_ho3dofficial/HO3D/test_official_mf1_likev1_fct\(x\)_ec/  \
    --lr 1e-2 \
    --n_iter 500 \
    --hodata_no_use_cache \
    --lambda_contact_loss 10.0 \
    --lambda_repulsion_loss 2.0 \
    --repulsion_query 0.030 \
    --repulsion_threshold 0.080 \
    --mode hand_obj

Results

Testing on the full datasets may take a while (roughly 0.5 to 1.5 days), so we also provide our test results in fitting_res.txt.

K-MANO

We provide a PyTorch implementation of our kinematic-chained MANO in lixiny/manopth, which is modified from the original hassony2/manopth. Thanks to Yana Hasson for providing the original code.
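
As a usage sketch, the layer can be driven like the original manopth ManoLayer. The call below assumes the upstream hassony2/manopth interface; the kinematic-chained fork may expose additional options that are not shown here.

# Sketch assuming the upstream manopth interface (the K-MANO fork may differ).
import torch
from manopth.manolayer import ManoLayer

mano_layer = ManoLayer(mano_root="assets/mano",  # folder with MANO_RIGHT.pkl
                       side="right",
                       use_pca=True,
                       ncomps=15)

pose = torch.zeros(1, 3 + 15)   # global rotation + PCA pose coefficients
shape = torch.zeros(1, 10)      # MANO shape parameters
verts, joints = mano_layer(pose, shape)
print(verts.shape, joints.shape)  # (1, 778, 3) vertices, (1, 21, 3) joints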

Citation

If you find this work helpful, please consider citing us:

@article{yang2020cpf,
  title={CPF: Learning a Contact Potential Field to Model the Hand-object Interaction},
  author={Yang, Lixin and Zhan, Xinyu and Li, Kailin and Xu, Wenqiang and Li, Jiefeng and Lu, Cewu},
  journal={arXiv preprint arXiv:2012.00924},
  year={2020}
}

If you have any questions or suggestions, do not hesitate to contact me at siriusyang[at]sjtu[dot]edu[dot]cn.

Comments
  • FileNotFoundError: [Errno 2] No such file or directory: 'assets/mano/MANO_RIGHT.pkl'


    I executed this command: python training/run_demo.py --gpu 0 --init_ckpt CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar --honet_mano_fhb_hand


So I moved the assets/mano folder to CPF/manopth/mano/webuser/, but I am still getting the error.

    opened by anjugopinath 3
  •  AttributeError: 'ParsedRequirement' object has no attribute 'req'


Could you tell me which version of Anaconda to use, please? I am getting the error below:

neptune:/s/red/a/nobackup/vision/anju/CPF$ conda env create -f environment.yaml
Collecting package metadata (repodata.json): done
Solving environment: done

==> WARNING: A newer version of conda exists. <==
  current version: 4.9.2
  latest version: 4.10.1

Please update conda by running

    $ conda update -n base -c defaults conda

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Installing pip dependencies: | Ran pip subprocess with arguments:
['/s/chopin/a/grad/anju/.conda/envs/cpf/bin/python', '-m', 'pip', 'install', '-U', '-r', '/s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt']
Pip subprocess output:
Collecting git+https://github.com/utiasSTARS/liegroups.git (from -r /s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt (line 1))
  Cloning https://github.com/utiasSTARS/liegroups.git to /tmp/pip-req-build-ey_prxpa
Obtaining file:///s/red/a/nobackup/vision/anju/CPF/manopth (from -r /s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt (line 12))
Obtaining file:///s/red/a/nobackup/vision/anju/CPF (from -r /s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt (line 13))
Collecting trimesh==3.8.10
  Using cached trimesh-3.8.10-py3-none-any.whl (625 kB)
Collecting open3d==0.10.0.0
  Using cached open3d-0.10.0.0-cp38-cp38-manylinux1_x86_64.whl (4.7 MB)
Collecting pyrender==0.1.43
  Using cached pyrender-0.1.43-py3-none-any.whl (1.2 MB)
Collecting scikit-learn==0.23.2
  Using cached scikit_learn-0.23.2-cp38-cp38-manylinux1_x86_64.whl (6.8 MB)
Collecting chumpy==0.69
  Using cached chumpy-0.69.tar.gz (50 kB)
Pip subprocess error:
  Running command git clone -q https://github.com/utiasSTARS/liegroups.git /tmp/pip-req-build-ey_prxpa
ERROR: Command errored out with exit status 1:
  command: /s/chopin/a/grad/anju/.conda/envs/cpf/bin/python -c 'import sys, setuptools, tokenize; ...' egg_info --egg-base /tmp/pip-pip-egg-info-k7bp5gq7
  cwd: /tmp/pip-install-hnf78qhk/chumpy/
  Complete output (7 lines):
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/tmp/pip-install-hnf78qhk/chumpy/setup.py", line 15, in <module>
      install_requires = [str(ir.req) for ir in install_reqs]
  AttributeError: 'ParsedRequirement' object has no attribute 'req'
  ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

    failed

    CondaEnvException: Pip failed

    opened by anjugopinath 3
  • How to use CPF on both hands?


Thanks a lot for your great work! I have a question: since you only use MANO_RIGHT.pkl, it seems that CPF can currently only construct a right-hand model, right? What would need to be modified to use CPF on both hands? Thanks!

    opened by buaacyw 3
  • Error when executing command "conda env create -f environment.yaml"


    Hi,

    I get the error below when executing the command "conda env create -f environment.yaml":

    CondaError: Downloaded bytes did not match Content-Length
      url: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/linux-64/pytorch-1.6.0-py3.8_cuda10.2.89_cudnn7.6.5_0.tar.bz2
      target_path: /home/anju/anaconda3/pkgs/pytorch-1.6.0-py3.8_cuda10.2.89_cudnn7.6.5_0.tar.bz2
      Content-Length: 564734769
      downloaded bytes: 221675180

    opened by anjugopinath 1
  • Some questions about the PiCR code


    In contacthead.py, the three decoders have different input dimensions:

    self.vertex_contact_decoder = PointNetDecodeModule(self._concat_feat_dim, 1)
    self.contact_region_decoder = PointNetDecodeModule(self._concat_feat_dim + 1, self.n_region)
    self.anchor_elasti_decoder = PointNetDecodeModule(self._concat_feat_dim + 17, self.n_anchor)

    I am wondering if this part is used to predict the selected anchor points within each subregion.

    The classification of subregions is obtained by contact_region_decoder, and then the anchor points are predicted by anchor_elasti_decoder. Is that right?

    I am a little confused because, according to the paper, Anchor Elasticity (AE) represents the elasticities of the attractive springs, but in the code the output of anchor_elasti_decoder seems to have no relation to the elasticity parameter. I am wondering if there is some part I have missed.

    Sorry for any trouble caused and thanks for your help!

    opened by lym29 0
  • What is the meaning of "adapt"?


    I notice that there are hand_pose_axisang_adapt_np and hand_pose_axisang_np in your code. Could you please explain the difference between them?

    opened by Yamato-01 5
  • Expected code release date?


    Hi!

    I just read through your paper; congratulations on the great work! I love the fact that you provide an anatomically constrained MANO and the per-object-vertex hand part affinity.

    I look forward to the code release :)

    Do you have a planned date in mind?

    All the best,

    Yana

    opened by hassony2 4