KinectMotionCapture

Overview

A simple application for capturing human body movements using the Kinect camera. The software can seamlessly save joint and bone positions for further analysis.

Features

  • Support for a single connected Kinect camera.
  • Tracking up to two people captured on the video stream.
  • Indicating by color which joints and bones are fully tracked or inferred.
  • Recording body movements of one person; joint and bone position data are saved to files.
  • Adjusting joint filtering options.
  • Saving screenshots of the current video stream.

The application recognizes only joints and bones that were fully tracked or inferred by the Kinect camera. Tracked joints, and bones with both of their joints tracked, are indicated in green; inferred joints, and bones with only one of their joints tracked, are indicated in yellow; bones with both of their joints inferred are indicated in red.
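
As an illustration of how this color coding relates to the SDK's tracking states (a sketch, not necessarily the application's exact code), a joint's JointTrackingState from the Kinect for Windows SDK v1.8 can be mapped to a drawing color as follows:

    // Illustrative sketch: mapping a Kinect SDK v1.8 joint tracking state to a color,
    // following the scheme described above.
    using System.Windows.Media;
    using Microsoft.Kinect;

    static class JointColors
    {
        public static Brush BrushFor(Joint joint)
        {
            switch (joint.TrackingState)
            {
                case JointTrackingState.Tracked:
                    return Brushes.Green;       // joint fully tracked
                case JointTrackingState.Inferred:
                    return Brushes.Yellow;      // joint inferred
                default:
                    return Brushes.Transparent; // not tracked: nothing is drawn
            }
        }

        // For a bone, the color depends on both of its joints:
        // both tracked -> green, only one tracked -> yellow, both inferred -> red.
    }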

Although the application is capable of tracking up to two skeletons on the video stream, it saves all positions as if there were only one skeleton source. Therefore, if more than one skeleton is tracked, the user should indicate the main body to be captured by using the Set body function.

For a general overview of the Kinect skeletal tracking system please refer to [1].

Functions

Set body

The Set body function allows the user to choose which body is captured (and its positions saved) from all bodies present on the video stream. To use this feature, the person to be captured must stand closest to the camera, and then the Set body button must be clicked.
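
As an illustration (an assumption about how such a feature can be implemented with the Kinect for Windows SDK v1.8, not necessarily the application's actual code), the skeleton standing closest to the sensor can be selected like this:

    // Sketch: select the tracked skeleton closest to the sensor (smallest Z).
    using System.Linq;
    using Microsoft.Kinect;

    static class BodySelection
    {
        public static Skeleton FindNearest(Skeleton[] skeletons)
        {
            return skeletons
                .Where(s => s != null && s.TrackingState == SkeletonTrackingState.Tracked)
                .OrderBy(s => s.Position.Z) // Position.Z is the distance from the sensor, in meters
                .FirstOrDefault();
        }
    }

    // The chosen skeleton could then be pinned for tracking, e.g.:
    // sensor.SkeletonStream.AppChoosesSkeletons = true;
    // sensor.SkeletonStream.ChooseSkeletons(nearest.TrackingId);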

Kinect smoothing parameters

The skeletal tracking joint information can be adjusted across frames to reduce jitter and stabilize the joint positions over time. This is done by adjusting the smoothing parameters. A comprehensive description of these options can be found in [1].
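
For illustration, the sketch below shows how such smoothing is configured through the SDK's TransformSmoothParameters structure; the parameter values are placeholders, not the application's defaults:

    // Sketch: enable skeletal tracking with joint smoothing (Kinect SDK v1.8).
    // The values below are placeholders, not the application's defaults.
    using Microsoft.Kinect;

    static class SmoothingSetup
    {
        public static void EnableSmoothedTracking(KinectSensor sensor)
        {
            var smoothing = new TransformSmoothParameters
            {
                Smoothing = 0.5f,          // amount of smoothing applied across frames
                Correction = 0.5f,         // how strongly the filter corrects toward raw data
                Prediction = 0.5f,         // number of frames predicted ahead
                JitterRadius = 0.05f,      // jitter reduction radius, in meters
                MaxDeviationRadius = 0.04f // maximum deviation of filtered positions, in meters
            };

            sensor.SkeletonStream.Enable(smoothing);
        }
    }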

Recording

Body movement can be recorded by clicking the Start recording button and stopped by clicking the Stop recording button; the data files are written once recording is stopped. All recorded data is saved as comma-separated files in the “data” folder in the root directory of the application.

Joint positions are saved to files named “<>-joint-<>.csv”, with the following columns:

  • timestamp - timestamp of the measurement,
  • x, y, z - joint position coordinates,
  • coord_type - whether the joint was fully tracked (1) or inferred (2).

Bone positions are saved to files named “<>-bone-<>-<>.csv”, with the following columns:

  • timestamp - timestamp of the measurement,
  • abs_m11 to abs_m44 - bone absolute rotation matrix,
  • abs_x, abs_y, abs_z, abs_w - bone absolute orientation as a quaternion,
  • h_m11 to h_m44 - bone hierarchical rotation matrix,
  • h_x, h_y, h_z, h_w - bone hierarchical orientation as a quaternion,
  • coord_type - whether both joints of the bone were fully tracked (1), both were inferred (2), or only one of them was tracked (3).
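
For further analysis, a recorded joint file can be loaded directly. Below is a minimal sketch of doing so in C#; the file name is hypothetical and the column order is assumed to match the description above:

    // Minimal sketch: read a recorded joint file and print the trajectory.
    // The file name is hypothetical; columns assumed: timestamp, x, y, z, coord_type.
    using System;
    using System.IO;
    using System.Linq;

    class JointCsvExample
    {
        static void Main()
        {
            var rows = File.ReadLines(@"data\example-joint-Head.csv") // hypothetical file name
                           .Skip(1)                                   // skip the header row (assumed present)
                           .Select(line => line.Split(','));

            foreach (var r in rows)
            {
                bool fullyTracked = r[4].Trim() == "1"; // coord_type: 1 = tracked, 2 = inferred
                Console.WriteLine($"t={r[0]}  x={r[1]}  y={r[2]}  z={r[3]}  tracked={fullyTracked}");
            }
        }
    }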

Requirements

  • .NET Framework 4.5.2
  • Kinect for Windows SDK v1.8

References

[1] https://msdn.microsoft.com/en-us/library/hh973074.aspx
