Overview

Minimal Hand

A minimal solution to hand motion capture from a single color camera at over 100fps. Easy to use, plug to run.

[teaser image]

This project provides the core components for hand motion capture:

  1. estimating joint locations from a monocular RGB image (DetNet)
  2. estimating joint rotations from locations (IKNet)

We focus on:

  1. ease of use (all you need is a webcam)
  2. time efficiency (on our 1080Ti, 8.9ms for DetNet, 0.9ms for IKNet)
  3. robustness to occlusion, hand-object interaction, fast motion, changing scale and viewpoint

Some links: [video] [paper] [supp doc] [webpage]

The author is currently too busy to prepare the training code for release. That said, the training part should not be difficult to implement. Feel free to open an issue for any problems you encounter.

PyTorch Version

Here is a PyTorch version implemented by @MengHao666. I haven't personally checked it, but I believe it is worth trying. Many thanks to @MengHao666!

With Unity

Here is a project that connects this repo to Unity. It looks very cool; many thanks to @vinnik-dmitry07!

Usage

Install dependencies

Please check requirements.txt. All dependencies are available via pip and conda.

Prepare MANO hand model

  1. Download the MANO model from here and unzip it.
  2. In config.py, set OFFICIAL_MANO_PATH to the left-hand model.
  3. Run python prepare_mano.py; you will get the converted MANO model, compatible with this project, at config.HAND_MESH_MODEL_PATH.
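
For reference, the relevant entries in config.py might look like the following. The exact file names and paths below are only illustrative; use the locations from your own MANO download and the defaults shipped with this repository:

# config.py -- illustrative values only, adjust to your own setup
# path to the official left-hand MANO model you downloaded and unzipped
OFFICIAL_MANO_PATH = './mano_v1_2/models/MANO_LEFT.pkl'
# where prepare_mano.py writes the converted model used by this project
HAND_MESH_MODEL_PATH = './model/hand_mesh/hand_mesh_model.pkl'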

Prepare pre-trained network models

  1. Download models from here.
  2. Put detnet.ckpt.* in model/detnet, and iknet.ckpt.* in model/iknet.
  3. Check config.py to make sure all required files are in place.

Run the demo for webcam input

  1. python app.py
  2. Put your right hand in front of the camera. The pre-trained model is for the left hand, but the input is flipped internally.
  3. Press ESC to quit.
  4. Although the model is robust to varying scales, ideally the image should be about 1.3x larger than the hand bounding box. A good bounding box may result in better accuracy. You can track the bounding box with the 2D predictions of the model; see the sketch after this list.
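
As a sketch of such bounding-box tracking (not part of the repository; the 128x128 crop size and the (u, v) pixel layout of the 2D predictions are assumptions), one could expand the keypoint bounding box by 1.3x and crop the next frame around it:

import numpy as np
import cv2

def crop_around_hand(frame, uv, margin=1.3, crop_size=128):
    # uv: (21, 2) array of 2D keypoints in pixel coordinates (u = column, v = row)
    u_min, v_min = uv.min(axis=0)
    u_max, v_max = uv.max(axis=0)
    center_u = (u_min + u_max) / 2
    center_v = (v_min + v_max) / 2
    half = max(u_max - u_min, v_max - v_min) * margin / 2
    # clamp the square crop to the frame boundaries
    x0 = int(max(center_u - half, 0))
    y0 = int(max(center_v - half, 0))
    x1 = int(min(center_u + half, frame.shape[1]))
    y1 = int(min(center_v + half, frame.shape[0]))
    return cv2.resize(frame[y0:y1, x0:x1], (crop_size, crop_size))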

We found that the model may fail on some "simple" poses. We think this is because such poses were not present in the training data. We are working on a v2 version with further extended data to tackle this problem.

Use the models in your project

Please check wrappers.py.
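
A minimal sketch of calling the wrapper (ModelPipeline and its process method are defined in wrappers.py; the input size, hand flipping, and the exact return values below are assumptions based on app.py, so please verify against the source):

import cv2
import numpy as np
from wrappers import ModelPipeline

model = ModelPipeline()  # loads the DetNet and IKNet checkpoints configured in config.py

frame = cv2.imread('right_hand.jpg')            # hypothetical test image
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # RGB input assumed
frame = cv2.resize(frame, (128, 128))           # assumed network input resolution
frame = np.flip(frame, axis=1).copy()           # flip a right hand to match the left-hand model
xyz, theta = model.process(frame)               # assumed outputs: 3D joint locations and joint rotations
print(xyz.shape, theta.shape)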

IKNet Alternative

We also provide an optimization-based IK solver here.

Dataset

The detection model (DetNet) is trained with the following datasets: the CMU Panoptic Dataset (CMU), the Rendered Handpose Dataset (RHD), and the GANerated Hands Dataset (GAN).

The IK model is trained with the poses shipped with MANO.

Citation

This is the official implementation of the paper "Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data" (CVPR 2020).

The quantitative numbers reported in the paper can be found in plot.py.

If you find the project helpful, please consider citing us:

@inproceedings{zhou2020monocular,
  title={Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data},
  author={Zhou, Yuxiao and Habermann, Marc and Xu, Weipeng and Habibie, Ikhsanul and Theobalt, Christian and Xu, Feng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={0--0},
  year={2020}
}
Comments
  • About SMPL (MoSh) labels

    Hello, another question: there are no MoSh labels (SMPL theta and beta) in datasets such as STB, RHD, and FreiHAND. How do you translate 3D keypoints to a mesh (theta and beta)? Hoping for your reply, thanks.

    opened by www516717402 13
  • how to use the Right-hand model

    In config.py, I set OFFICIAL_MANO_PATH to the right-hand model and ran python prepare_mano.py. This gives me the converted MANO model for the right hand, but when I use it, the results are very bad. What am I doing wrong? I would like to know how to use the right-hand MANO model, or how to convert it correctly. Looking forward to your reply! Thanks a lot.

    opened by huangfuts 12
  • Questions about training IKNet

    Thank you for the great project. I have a few questions about training IKNet:

    1. When changing the original 16 rotations of MANO into 21 rotations, do W, T0, I0, M0, R0, and L0 share the rotation of W in the original MANO?
    2. I found that the joints_xyz calculated from the MANO ref_pose and the transformed 21 rotation parameters (using the method in hand_mesh.py) is not equal to the 'J_transformed' saved in the MANO pkl file, even though the joint order has been adjusted according to kinematics.py. When using the MANO dataset to train IKNet, how did you get the ground-truth 3D joint annotations in Lxyz? Is the calculation method of FK(Q) the same as the calculation of joint_xyz in hand_mesh.py?
    opened by Gel-smile 9
  • How to mix and train the different datasets?

    The paper says that DetNet is trained on 3 datasets: the CMU Panoptic Dataset (CMU), the Rendered Hand-pose Dataset (RHD), and the GANerated Hands Dataset (GAN).

    Since the images of the three datasets differ from each other, could you please tell me how to preprocess the images?

    opened by LyazS 8
  • How to get beta in IKNet?

    You have done really great work!

    When I read your paper, I am a little confused about how to find the best beta in IKNet by minimizing E(beta). Is beta obtained directly by solving the function, or with numerical methods such as the Newton downhill method?

    Thank you. Best wishes.

    opened by Mrsirovo 6
  • Why is the multiplication delta = delta * length performed?

    Hi, at line 166 of wrappers.py, delta = delta * length is used as one of the inputs to the IK model. May I ask the reason for this? delta is the normalized joint direction vector, so why multiply it by the bone length?

    My understanding is that the inputs to the downstream IK model should be the hand mesh template parameters, the hand pose parameters theta (delta), the skinning weights, and the hand joint coordinates (xyz).

    So I am confused: wouldn't delta alone be enough as input? Why multiply it by length?

    opened by tonylin52 5
  • how to do "global alignment"?

    Hi, I got confused about another problem.

    In your paper, you said "As previous work, we perform a global alignment to better measure the local hand pose." How do you implement this "global alignment"? Is it just translating the root joint to the same location as the label (is the label here also root-relative and normalized using the reference bone)? I got an AUC of only 0.1 using DetNet retrained on RHD.

    Could you point to the "previous work" that does a global alignment like yours? It would be better if their code were publicly available. Thanks!

    opened by MengHao666 5
  • How can I use the model output quaternion to unity?

    Thank you for your great work! I'm trying to use the model output to animate a virtual hand in Unity. I tried to set the quaternion as Unity's localRotation, but it did not work. Could you share some insight about how I can achieve that?

    opened by wangtss 5
  • IK using 3D joint coordinates

    Hello, first I would like to congratulate you on the amazing paper. I have a question regarding the IK architecture: is there any comparison between the IK architecture proposed here and the algorithm you previously proposed based on Levenberg-Marquardt on the MANO hand? Additionally, could you guide me on applying the IK architecture without running the entire code? I have some ground-truth 3D coordinates and want to obtain the IK parameters. Thanks a lot.

    opened by Amebradi 5
  • Obtaining MoCAP from a two hand video dataset

    Greetings and many thanks for the great work.

    I wanted to use your code to extract MoCap data from a first-person RGB video dataset with a clear view of both hands during a task. Given that your model is restricted to predicting a single hand, I wonder whether it will consistently prefer the left hand when presented with videos showing both. If that's the case, I suppose I could process the dataset twice, flipping it the second time, to obtain both hands' coordinates, right?

    opened by Linardos 5
  • Any plans on evaluating on FreiHAND dataset?

    I'm curious, as it seems to be one of the better publicly available datasets: not only does it include really accurate 3D poses, but they are all on real images, including challenging poses and object interactions. On top of that, it includes MANO hand-shape ground truths. I would love to see how this model performs.

    It also allows evaluating without alignment, since both camera intrinsics and scale are included for each image.

    I'm also curious whether this would be a good alternative to the MoCap data for training IKNet, since it includes the hand-shape ground truths. I'm not sure if I should open a separate issue for that to make it easier for others to find.

    opened by pablovela5620 5
  • Keypoint representation as input to IKNet

    I am trying to use IKNet separately, starting from hand keypoints extracted with MediaPipe. For this to work, I need to make sure the MediaPipe hand coordinates are preprocessed to match the expected input format of IKNet (origin, scale, possibly rotation as well?).

    I ran into two questions here:

    1. I can see from your code that the keypoints have to be shifted to make 'M1' the origin. But what is the assumed scale? In the code you use IK_UNIT_LENGTH when rescaling from the MANO reference keypoints, but it is not clear what this relates to or where it comes from. Also, is there an assumption on the rotation of the hand (e.g. palm orientation)?

    2. I was assuming that the 'mpii_ref' keypoint set you pass as input to IKNet is some kind of "relaxed" reference hand (converted from the MANO code base). When I plot it, however, only the projection onto the xz plane matches this assumption; the y coordinates look very strange, so I assume I am misinterpreting something. Or maybe it incorporates some assumptions about the IKNet input that I also need to apply to my xyz keypoints, since it seems to be passed as a reference hand? Could you clarify?

    Examples: (1) the mpii_ref hand in front view looks fine; (2) the mpii_ref hand in a rotated xyz view shows unnaturally curved fingers and a very long wrist-to-thumb connection; (3) for comparison, the MediaPipe hand in front view; (4) the MediaPipe hand in the same xyz view.

    opened by jdambre 1
  • Project dependencies may have API risk issues

    Hi, in minimal-hand, inappropriate dependency version constraints can cause risks.

    Below are the dependencies and version constraints that the project is using

    pygame==1.9.4
    open3d==0.9
    tensorflow_gpu==1.14.0
    transforms3d==0.3.1
    keyboard==0.13.4
    opencv_python==3.4.3.18
    numpy==1.18.1
    

    The version constraint == introduces a risk of dependency conflicts because the dependency scope is too strict. The version constraints "no upper bound" and "*" introduce a risk of missing-API errors, because the latest version of a dependency may remove some APIs.

    After further analysis, in this project the version constraint of the dependency keyboard can be changed to >=0.9.3,<=0.13.5, and the version constraint of the dependency numpy can be changed to >=1.8.0,<=1.23.0rc3.

    The above modifications can reduce dependency conflicts as much as possible while introducing the latest versions without causing call errors in the project.

    The invocation of the current project includes all the following methods.

    The calling methods from the keyboard
    keyboard.is_pressed
    
    The calling methods from the numpy
    numpy.linalg.norm
    
    The calling methods from all methods
    pygame.init
    open3d.visualization.Visualizer.update_renderer
    tensorflow.pad
    pickle.load
    open3d.visualization.Visualizer.update_geometry
    tensorflow.layers.dense
    zero_padding
    detnet
    open3d.geometry.TriangleMesh
    wrappers.ModelPipeline
    self.ik_model.process
    tf_hmap_to_uv
    pygame.display.set_mode.blit
    open3d.visualization.Visualizer.create_window
    tensorflow.nn.relu
    cv2.VideoCapture
    pickle.load.toarray
    load_pkl
    lmaps.append
    numpy.maximum
    tensorflow.ConfigProto
    dmaps.append
    tensorflow.norm
    tensorflow.reshape
    tensorflow.contrib.layers.xavier_initializer
    self.cap.read
    matplotlib.pyplot.show
    tensorflow.cast
    numpy.matmul
    dense
    open3d.geometry.TriangleMesh.compute_triangle_normals
    tensorflow.layers.batch_normalization
    numpy.sum
    viewer.get_view_control.set_constant_z_far
    capture.read
    transforms3d.quaternions.quat2mat
    pygame.time.Clock
    viewer.get_view_control.convert_to_pinhole_camera_parameters
    numpy.expand_dims
    tensorflow.contrib.layers.l2_regularizer
    tensorflow.concat.get_shape
    str
    utils.OneEuroFilter
    xyz_to_delta
    self.compute_alpha
    self.dx_filter.process
    open3d.geometry.TriangleMesh.compute_vertex_normals
    matplotlib.pyplot.plot
    pickle.dump
    conv_bn
    pygame.time.Clock.tick
    tensorflow.expand_dims
    features.get_shape.as_list
    tensorflow.nn.max_pool2d
    keyboard.is_pressed
    tensorflow.name_scope
    frame_large.np.flip.copy
    data.items
    numpy.abs
    net_2d
    open3d.visualization.Visualizer
    len
    utils.OneEuroFilter.process
    dense_bn
    matplotlib.pyplot.legend
    tensorflow.gather_nd
    tensorflow.argmax
    LowPassFilter
    tensorflow.train.Saver
    pygame.surfarray.make_surface
    tensorflow.stack
    numpy.linalg.norm
    MANOHandJoints.labels.index
    viewer.get_view_control.convert_from_pinhole_camera_parameters
    tensorflow.nn.sigmoid
    matplotlib.pyplot.xlabel
    plot_pck
    inputs.get_shape
    calculate_auc
    numpy.linspace.reshape
    pygame.display.update
    tensorflow.train.Saver.restore
    tensorflow.concat
    open3d.utility.Vector3dVector
    bottleneck
    open3d.visualization.Visualizer.poll_events
    self.det_model.process
    tensorflow.initializers.truncated_normal
    open3d.visualization.Visualizer.get_view_control
    numpy.transpose
    int
    xyz.get_shape.as_list
    hand_mesh.HandMesh.set_abs_quat
    numpy.tile
    cam_params.intrinsic.set_intrinsics
    self.ref_T.append
    open3d.utility.Vector3iVector
    numpy.array
    viewer.get_render_option.load_from_json
    self.graph.as_default
    open3d.visualization.Visualizer.get_render_option
    tensorflow.shape
    get_pose_tile
    mano_to_mpii
    xyz.get_shape
    tensorflow.tile
    ModelIK
    tensorflow.layers.conv2d
    numpy.stack
    tensorflow.transpose
    tensorflow.Session
    frame.np.flip.copy
    pygame.display.set_mode
    MANOHandJoints.mesh_mapping.items
    cv2.resize
    open3d.geometry.TriangleMesh.paint_uniform_color
    transforms3d.axangles.axangle2mat
    hmaps.append
    net_3d
    range
    pygame.display.set_caption
    hand_mesh.HandMesh
    MPIIHandJoints.labels.index
    self.verts.copy
    self.x_filter.process
    tensorflow.Graph
    ModelDet
    numpy.linspace
    wrappers.ModelPipeline.process
    matplotlib.pyplot.grid
    tensorflow.variable_scope
    numpy.concatenate
    tensorflow.constant
    tensorflow.maximum
    self.ref_pose.append
    conv_bn_relu
    capture.OpenCVCapture
    live_application
    matplotlib.pyplot.ylabel
    matplotlib.pyplot.tight_layout
    open3d.visualization.Visualizer.add_geometry
    kinematics.mpii_to_mano
    utils.imresize
    tensorflow.placeholder
    cam_params.extrinsic.copy
    numpy.stack.append
    self.sess.run
    resnet50
    open
    numpy.flip
    tensorflow.where
    prepare_mano
    numpy.finfo
    network_fn
    numpy.zeros
    inputs.get_shape.as_list
    

    @developer Could you please help me check this issue? May I submit a pull request to fix it? Thank you very much.

    opened by PyDeps 0
  • About the shape of the generated hand

    Hello, thank you very much for sharing your work. While reading the code and trying it out, I have a few questions: 1. The code does not estimate the beta parameters, so the generated hand cannot preserve the original hand's shape (e.g. finger length, proportions, thickness), right? 2. The size of the generated hand model is fixed and does not change with the size of the hand in the image, is that correct? 3. The input to the code is video; if only a single image is given, will the results get worse?

    opened by ChaoYingYu 0
Owner
Yuxiao Zhou
Good luck, have fun.