CPF: Learning a Contact Potential Field to Model the Hand-object Interaction


Contact Potential Field

This repo contains the model, demo, and test code of our paper: CPF: Learning a Contact Potential Field to Model the Hand-object Interaction

Guide to the Demo

1. Get our code:

$ git clone --recursive https://github.com/lixiny/CPF.git
$ cd CPF

2. Set up your new environment:

$ conda env create -f environment.yaml
$ conda activate cpf
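
Once the env is created, a quick import check (a hypothetical helper, not part of the repo; the package list is assumed from the demo's dependencies) confirms the core libraries resolved:

# sanity_check.py -- hypothetical helper, not part of the repo.
# Verifies that core dependencies of the demo import cleanly.
for pkg in ("torch", "trimesh", "open3d", "pyrender"):
    try:
        mod = __import__(pkg)
        print(f"{pkg:10s} OK (version {getattr(mod, '__version__', 'unknown')})")
    except ImportError as err:
        print(f"{pkg:10s} MISSING: {err}")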

3. Download the asset files and put them in the assets folder.

Download the MANO model files from the official MANO website and put them into assets/mano. We currently only use MANO_RIGHT.pkl.

Now your assets folder should look like this:

.
├── anchor/
│   ├── anchor_mapping_path.pkl
│   ├── anchor_weight.txt
│   ├── face_vertex_idx.txt
│   └── merged_vertex_assignment.txt
├── closed_hand/
│   └── hand_mesh_close.obj
├── fhbhands_fits/
│   ├── Subject_1/
│   │   └── ...
│   ├── Subject_2/
│   └── ...
├── hand_palm_full.txt
└── mano/
    ├── fhb_skel_centeridx9.pkl
    ├── info.txt
    ├── LICENSE.txt
    └── MANO_RIGHT.pkl
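
To verify the layout, a small check script (hypothetical helper; the paths come from the tree above) can confirm every expected file is in place:

# check_assets.py -- hypothetical helper; paths are taken from the tree above.
import os

REQUIRED = [
    "assets/anchor/anchor_mapping_path.pkl",
    "assets/anchor/anchor_weight.txt",
    "assets/closed_hand/hand_mesh_close.obj",
    "assets/hand_palm_full.txt",
    "assets/mano/MANO_RIGHT.pkl",  # downloaded from the official MANO website
]

missing = [p for p in REQUIRED if not os.path.isfile(p)]
print("all assets found" if not missing else f"missing: {missing}")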

4. Download datasets

First-Person Hand Action Benchmark (fhb)

Download and unzip the First-Person Hand Action Benchmark dataset following the official instructions into the data/fhbhands folder. If everything is correct, your data/fhbhands folder should look like this:

.
├── action_object_info.txt
├── action_sequences_normalized/
├── change_log.txt
├── data_split_action_recognition.txt
├── file_system.jpg
├── Hand_pose_annotation_v1/
├── Object_6D_pose_annotation_v1_1/
├── Object_models/
├── Subjects_info/
├── Video_files/
├── Video_files_480/   # optional

Optionally, resize the images (this speeds up training!) using the reduce_fphab.py script from handobjectconsist:

$ python reduce_fphab.py
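
For reference, the resize step amounts to something like the sketch below (a simplified assumption of what reduce_fphab.py does; the 480 target width is inferred from the Video_files_480 folder name):

# Simplified sketch of the resize step; the real logic lives in
# handobjectconsist/reduce_fphab.py, and details here are assumptions.
from pathlib import Path
from PIL import Image

src_root = Path("data/fhbhands/Video_files")
dst_root = Path("data/fhbhands/Video_files_480")
for src in src_root.rglob("*.jpeg"):
    dst = dst_root / src.relative_to(src_root)
    dst.parent.mkdir(parents=True, exist_ok=True)
    img = Image.open(src)
    img.resize((480, int(img.height * 480 / img.width))).save(dst)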

Download our fhbhands_supp and place it at data/fhbhands_supp.

Download our fhbhands_example and place it at data/fhbhands_example. This fhbhands_example contains 10 samples designed to demonstrate our pipeline. Your data folder should now look like this:

data/
├── fhbhands/
├── fhbhands_supp/
│   ├── Object_models/
│   └── Object_models_binvox/
└── fhbhands_example/
    ├── annotations/
    ├── images/
    ├── object_models/
    └── sample_list.txt

HO3D

Download and unzip the HO3D dataset following the official instructions into the data/HO3D folder. If everything is correct, the HO3D and YCB folders in your data folder should look like this:

data/
├── HO3D/
│   ├── evaluation/
│   ├── evaluation.txt
│   ├── train/
│   └── train.txt
├── YCB_models/
│   ├── 002_master_chef_can/
│   ├── ...
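
The two split files list one frame per line; a quick count (assuming that format) confirms the download is complete:

# Count the frames listed in the HO3D split files
# (one "sequence/frame_id" entry per line is assumed).
from pathlib import Path

root = Path("data/HO3D")
for split in ("train.txt", "evaluation.txt"):
    lines = (root / split).read_text().splitlines()
    print(f"{split}: {len(lines)} entries, first: {lines[0] if lines else 'EMPTY'}")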

Download our YCB_models_supp and place it at data/YCB_models_supp.

Now the data folder should have a root structure like:

data/
├── fhbhands/
├── fhbhands_supp/
├── fhbhands_example/
├── HO3D/
├── YCB_models/
├── YCB_models_supp/

5. Download pre-trained checkpoints

Download our pre-trained CPF_checkpoints and unzip them into the CPF_checkpoints folder:

CPF_checkpoints/
├── honet/
│   ├── fhb/
│   ├── ho3dofficial/
│   └── ho3dv1/
├── picr/
│   ├── fhb/
│   ├── ho3dofficial/
│   └── ho3dv1/
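
The checkpoints are ordinary PyTorch archives, so you can peek inside one to confirm it loads (we assume nothing about the contents beyond it being loadable):

# Inspect a downloaded checkpoint; .pth.tar files are standard torch archives.
import torch

ckpt = torch.load("CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))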

6. Launch visualization

We provide an FHBExample dataset in hocontact/hodatasets/fhb_example.py that contains only 10 samples to demonstrate our pipeline. Note: this demo requires an active screen for visualization. Press q in the "runtime hand" window to start fitting.

$ python training/run_demo.py \
    --gpu 0 \
    --init_ckpt CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar \
    --honet_mano_fhb_hand

7. Test on full datasets (FHB, HO3Dv1, HO3Dofficial)

We provide shell scripts to test on the full datasets and approximately reproduce our results.

FHB

Dump the results of HoNet and PiCR:

$ python training/dumppicr_dist.py \
    --gpu 0,1 \
    --dist_master_addr localhost \
    --dist_master_port 12355 \
    --exp_keyword fhb \
    --train_datasets fhb \
    --train_splits train \
    --val_dataset fhb \
    --val_split test \
    --split_mode actions \
    --batch_size 8 \
    --dump_eval \
    --dump \
    --vertex_contact_thresh 0.8 \
    --filter_thresh 5.0 \
    --dump_prefix common/picr \
    --init_ckpt CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar
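
The --vertex_contact_thresh flag binarizes the per-vertex contact probabilities that PiCR predicts. Conceptually (a hedged illustration on dummy data, not the repo's dump code):

# What a 0.8 vertex-contact threshold means, on dummy probabilities.
import torch

contact_prob = torch.rand(3000)        # per-object-vertex contact probability
contact_mask = contact_prob >= 0.8     # --vertex_contact_thresh 0.8
print(f"{int(contact_mask.sum())} of {contact_prob.numel()} vertices kept as contacts")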

Then reload the dumped results and run the GeO optimizer:

# setting 1: hand-only
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python training/optimize.py \
    --n_workers 16 \
    --data_path common/picr/fhbhands/test_actions_mf1.0_rf0.25_fct5.0_ec \
    --mode hand

# setting 2: hand-obj
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python training/optimize.py \
    --n_workers 16 \
    --data_path common/picr/fhbhands/test_actions_mf1.0_rf0.25_fct5.0_ec \
    --mode hand_obj \
    --compensate_tsl

HO3Dv1

Dump:

$ python training/dumppicr_dist.py  \
    --gpu 0,1 \
    --dist_master_addr localhost \
    --dist_master_port 12356 \
    --exp_keyword ho3dv1 \
    --train_datasets ho3d \
    --train_splits train \
    --val_dataset ho3d \
    --val_split test \
    --split_mode objects \
    --batch_size 4 \
    --dump_eval \
    --dump \
    --vertex_contact_thresh 0.8 \
    --filter_thresh 5.0 \
    --dump_prefix common/picr_ho3dv1 \
    --init_ckpt CPF_checkpoints/picr/ho3dv1/checkpoint_300.pth.tar

Then reload the dumped results and run the GeO optimizer:

# hand-only
$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python training/optimize.py \
    --n_workers 24 \
    --data_path common/picr_ho3dv1/HO3D/test_objects_mf1_likev1_fct5.0_ec/ \
    --lr 1e-2 \
    --n_iter 500 \
    --hodata_no_use_cache \
    --lambda_contact_loss 10.0 \
    --lambda_repulsion_loss 4.0 \
    --repulsion_query 0.030 \
    --repulsion_threshold 0.080 \
    --mode hand

# hand-obj
$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python training/optimize.py \
    --n_workers 24 \
    --data_path common/picr_ho3dv1/HO3D/test_objects_mf1_likev1_fct5.0_ec/ \
    --lr 1e-2 \
    --n_iter 500  \
    --hodata_no_use_cache \
    --lambda_contact_loss 10.0 \
    --lambda_repulsion_loss 6.0 \
    --repulsion_query 0.030 \
    --repulsion_threshold 0.080 \
    --mode hand_obj
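
The --lambda_contact_loss and --lambda_repulsion_loss weights above balance the two energy terms that GeO optimizes: attraction between hand anchors and their predicted contact targets, and repulsion that keeps the hand from sinking into the object. The sketch below is a loose, hypothetical rendering of those terms (names and formulas are illustrative; see training/optimize.py for the real implementation):

# Loose sketch of the two GeO energy terms; illustrative only.
import torch

def contact_energy(anchor_pos, target_pos, elasticity):
    # Attractive spring energy: elasticity-weighted squared distance between
    # each hand anchor and its predicted contact target on the object.
    return (elasticity * ((anchor_pos - target_pos) ** 2).sum(dim=-1)).sum()

def repulsion_energy(hand_verts, obj_verts, threshold=0.080):
    # Penalize hand vertices closer than `threshold` (cf. --repulsion_threshold)
    # to their nearest object vertex.
    nearest = torch.cdist(hand_verts, obj_verts).min(dim=1).values
    return torch.relu(threshold - nearest).sum()

# total = lambda_contact * contact_energy(...) + lambda_repulsion * repulsion_energy(...)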

HO3Dofficial

Dump:

$ python training/dumppicr_dist.py  \
    --gpu 0,1 \
    --dist_master_addr localhost \
    --dist_master_port 12356 \
    --exp_keyword ho3dofficial \
    --train_datasets ho3d \
    --train_splits val \
    --val_dataset ho3d \
    --val_split test \
    --split_mode official \
    --batch_size 4 \
    --dump_eval \
    --dump \
    --test_dump \
    --vertex_contact_thresh 0.8 \
    --filter_thresh 5.0 \
    --dump_prefix common/picr_ho3dofficial \
    --init_ckpt CPF_checkpoints/picr/ho3dofficial/checkpoint_300.pth.tar

Then reload the dumped results and run the GeO optimizer:

$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python training/optimize.py \
    --n_workers 24 \
    --data_path common/picr_ho3dofficial/HO3D/test_official_mf1_likev1_fct\(x\)_ec/  \
    --lr 1e-2 \
    --n_iter 500 \
    --hodata_no_use_cache \
    --lambda_contact_loss 10.0 \
    --lambda_repulsion_loss 2.0 \
    --repulsion_query 0.030 \
    --repulsion_threshold 0.080 \
    --mode hand_obj

Results

Testing on the full dataset may take a while (0.5 to 1.5 days), so we also provide our test results in fitting_res.txt.

K-MANO

We provide a PyTorch implementation of our kinematic-chained MANO in lixiny/manopth, modified from the original hassony2/manopth. Thanks to Yana Hasson for providing the original code.
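
A minimal forward-pass sketch (the call signature follows the original hassony2/manopth; the kinematic-chained fork may expose extra options):

# Minimal MANO forward pass; signature follows hassony2/manopth.
import torch
from manopth.manolayer import ManoLayer

mano = ManoLayer(mano_root="assets/mano", use_pca=False, flat_hand_mean=True)
pose = torch.zeros(1, 48)    # axis-angle: 3 global + 15 joints x 3
shape = torch.zeros(1, 10)   # MANO shape (beta) coefficients
verts, joints = mano(pose, shape)
print(verts.shape, joints.shape)  # torch.Size([1, 778, 3]) torch.Size([1, 21, 3])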

Citation

If you find this work helpful, please consider citing us:

@article{yang2020cpf,
  title={CPF: Learning a Contact Potential Field to Model the Hand-object Interaction},
  author={Yang, Lixin and Zhan, Xinyu and Li, Kailin and Xu, Wenqiang and Li, Jiefeng and Lu, Cewu},
  journal={arXiv preprint arXiv:2012.00924},
  year={2020}
}

If you have any questions or suggestions, do not hesitate to contact me at siriusyang[at]sjtu[dot]edu[dot]cn.

Comments
  • FileNotFoundError: [Errno 2] No such file or directory: 'assets/mano/MANO_RIGHT.pkl'

    I executed this command: python training/run_demo.py --gpu 0 --init_ckpt CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar --honet_mano_fhb_hand


    So I moved the assets/mano folder to CPF/manopth/mano/webuser/, but I am still getting the error.

    opened by anjugopinath 3
  • AttributeError: 'ParsedRequirement' object has no attribute 'req'

    Could you tell me which version of Anaconda to use please? I am getting the below error:

    neptune:/s/red/a/nobackup/vision/anju/CPF$ conda env create -f environment.yaml
    Collecting package metadata (repodata.json): done
    Solving environment: done

    ==> WARNING: A newer version of conda exists. <==
      current version: 4.9.2
      latest version: 4.10.1

    Please update conda by running

    $ conda update -n base -c defaults conda
    

    Preparing transaction: done
    Verifying transaction: done
    Executing transaction: done
    Installing pip dependencies: | Ran pip subprocess with arguments:
    ['/s/chopin/a/grad/anju/.conda/envs/cpf/bin/python', '-m', 'pip', 'install', '-U', '-r', '/s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt']
    Pip subprocess output:
    Collecting git+https://github.com/utiasSTARS/liegroups.git (from -r /s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt (line 1))
      Cloning https://github.com/utiasSTARS/liegroups.git to /tmp/pip-req-build-ey_prxpa
    Obtaining file:///s/red/a/nobackup/vision/anju/CPF/manopth (from -r /s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt (line 12))
    Obtaining file:///s/red/a/nobackup/vision/anju/CPF (from -r /s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt (line 13))
    Collecting trimesh==3.8.10
      Using cached trimesh-3.8.10-py3-none-any.whl (625 kB)
    Collecting open3d==0.10.0.0
      Using cached open3d-0.10.0.0-cp38-cp38-manylinux1_x86_64.whl (4.7 MB)
    Collecting pyrender==0.1.43
      Using cached pyrender-0.1.43-py3-none-any.whl (1.2 MB)
    Collecting scikit-learn==0.23.2
      Using cached scikit_learn-0.23.2-cp38-cp38-manylinux1_x86_64.whl (6.8 MB)
    Collecting chumpy==0.69
      Using cached chumpy-0.69.tar.gz (50 kB)

    Pip subprocess error:
    Running command git clone -q https://github.com/utiasSTARS/liegroups.git /tmp/pip-req-build-ey_prxpa
    ERROR: Command errored out with exit status 1:
      command: /s/chopin/a/grad/anju/.conda/envs/cpf/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-hnf78qhk/chumpy/setup.py'"'"'; __file__='"'"'/tmp/pip-install-hnf78qhk/chumpy/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-k7bp5gq7
      cwd: /tmp/pip-install-hnf78qhk/chumpy/
    Complete output (7 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-hnf78qhk/chumpy/setup.py", line 15, in <module>
        install_requires = [str(ir.req) for ir in install_reqs]
      File "/tmp/pip-install-hnf78qhk/chumpy/setup.py", line 15, in <listcomp>
        install_requires = [str(ir.req) for ir in install_reqs]
    AttributeError: 'ParsedRequirement' object has no attribute 'req'
    ----------------------------------------
    ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

    failed

    CondaEnvException: Pip failed

    opened by anjugopinath 3
  • How to use CPF on both hands?

    How to use CPF on both hands?

    Thanks a lot for your great work! I have a question: since you only use MANO_RIGHT.pkl, it seems that CPF can currently only construct a right-hand model, right? What needs to be modified to use CPF on both hands? Thanks!

    opened by buaacyw 3
  • Error when executing command "conda env create -f environment.yaml"

    Hi,

    I get the below error when executing the command "conda env create -f environment.yaml"

    CondaError: Downloaded bytes did not match Content-Length
      url: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/linux-64/pytorch-1.6.0-py3.8_cuda10.2.89_cudnn7.6.5_0.tar.bz2
      target_path: /home/anju/anaconda3/pkgs/pytorch-1.6.0-py3.8_cuda10.2.89_cudnn7.6.5_0.tar.bz2
      Content-Length: 564734769
      downloaded bytes: 221675180

    opened by anjugopinath 1
  • Some questions about PiQR code

    In contacthead.py, the three decoders have different input dimensions:

    self.vertex_contact_decoder = PointNetDecodeModule(self._concat_feat_dim, 1)
    self.contact_region_decoder = PointNetDecodeModule(self._concat_feat_dim + 1, self.n_region)
    self.anchor_elasti_decoder = PointNetDecodeModule(self._concat_feat_dim + 17, self.n_anchor)

    I am wondering if this part is used to predict the selected anchor points within each subregion.

    The classification of subregions is obtained by contact_region_decoder, and then the anchor points are predicted by anchor_elasti_decoder. Is that right?

    I am a little bit confused because, according to the paper, Anchor Elasticity (AE) represents the elasticities of the attractive springs. But in the code, the output of anchor_elasti_decoder has no relation to the elasticity parameter; I'm wondering if there's some part I've missed.

    Sorry for any trouble caused and thanks for your help!

    opened by lym29 0
  • What's the meaning of "adapt"?

    I notice that there are hand_pose_axisang_adapt_np and hand_pose_axisang_np in your code. Could you please explain the difference between them?

    opened by Yamato-01 5
  • Expected code date?

    Hi !

    I just read through your paper, congratulations on the great work! I love the fact that you provide an anatomically-constrained MANO, and the per-object-vertex hand part affinity.

    I look forward to the code release :)

    Do you have a planned date in mind?

    All the best,

    Yana

    opened by hassony2 4
Releases (v1.0.0)
Owner
Lixin YANG
PhD student @ SJTU. Computer Vision, Robotic Vision and Hand-obj Interaction