MoVi-Toolbox

Data Preparation, Processing, and Visualization for MoVi Data, https://www.biomotionlab.ca/movi/

MoVi is a large multipurpose dataset of human motion and video.

Here we provide tools and tutorials for using MoVi in your research projects.

Table of Contents

  • Installation
  • Tutorials
  • Important Notes
  • Citation
  • License
  • Contact

Installation

Requirements

  • Python 3.*
  • MATLAB 2017 or newer

If you are interested in using the body shape data (or the original AMASS/MoVi data), follow the instructions on the AMASS GitHub page.

Tutorials

  • We have provided brief tutorials on how to work with the mocap data. Some of the functions are only available in MATLAB or Python, so please take a look at both tutorial files, tutorial_MATLAB.m and tutorial_python.ipynb (a minimal loading sketch follows this list).

  • The tutorial on how to get access to the dataset is given here.
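
For reference, here is a minimal sketch of loading a single MoVi sequence in Python. It assumes the usual AMASS-style .npz layout (keys such as 'poses', 'betas', 'trans', and 'mocap_framerate'); the file name is only a placeholder, and the tutorial notebooks remain the authoritative reference.

# Minimal loading sketch: key names follow the common AMASS .npz convention
# and the file name below is hypothetical.
import numpy as np

seq = np.load("Subject_1_F_amass.npz")      # hypothetical file name

poses = seq["poses"]        # per-frame pose parameters in axis-angle form
betas = seq["betas"]        # body shape coefficients
trans = seq["trans"]        # per-frame global root translation (x, y, z)
fps = float(seq["mocap_framerate"])

print(f"{poses.shape[0]} frames at {fps:.0f} fps")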

Important Notes

  • The video data for each round are provided as a single continuous sequence rather than as individual motions. If you want synchronized video and AMASS (joint and body) data, trim the F_PGx_Subject_x_L.avi files into single-motion video files using the single_videos.m function.
  • The timestamps which separate the motions are provided under the name “flags” in the V3D files (only for the f and s rounds). Please note that “flags30” should be used for the video data and “flags120” for the mocap data; two sets of flags are needed because the video data were recorded at 30 fps while the mocap data were recorded at 120 fps (a minimal trimming sketch follows this list).
  • The body mesh is not provided in the AMASS files by default. Please use the amass_fk function to augment the AMASS data with the corresponding body mesh (vertices); the details are explained in tutorial_python.ipynb.
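
Below is a minimal sketch of how the two sets of flags could be used to cut the continuous recordings into single motions. The arrays are dummy placeholders and the exact layout of the flag fields inside the V3D files may differ; single_videos.m is the provided way to trim the actual video files.

# Minimal trimming sketch with dummy data; the real flag arrays come from the
# "flags120" / "flags30" fields of the V3D files and may be shaped differently.
import numpy as np

def trim_sequence(frames, flags):
    """Split a (num_frames, ...) array into a list of per-motion segments."""
    return [frames[start:end] for start, end in flags]

# Dummy stand-ins: 20 s of mocap at 120 fps and 20 s of video at 30 fps.
mocap_markers = np.zeros((2400, 60, 3))      # frames x markers x xyz (dummy sizes)
video_frames = np.zeros((600, 480, 640, 3))  # frames x height x width x channels (dummy sizes)

# (start, end) frame indices for two motions, expressed at 120 fps ...
flags120 = np.array([[0, 1200], [1200, 2400]])
# ... and the same timestamps expressed at 30 fps.
flags30 = flags120 * 30 // 120

mocap_segments = trim_sequence(mocap_markers, flags120)
video_segments = trim_sequence(video_frames, flags30)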

Citation

Please cite the following paper if you use this code directly or indirectly in your research/projects:

@misc{ghorbani2020movi,
    title={MoVi: A Large Multipurpose Motion and Video Dataset},
    author={Saeed Ghorbani and Kimia Mahdaviani and Anne Thaler and Konrad Kording and Douglas James Cook and Gunnar Blohm and Nikolaus F. Troje},
    year={2020},
    eprint={2003.01888},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

License

Software Copyright License for non-commercial scientific research purposes. Before you download and/or use the Motion and Video (MoVi) dataset, please carefully read the terms and conditions stated on our website and in any accompanying documentation. If you are using the part of the dataset that was post-processed as part of AMASS, you must follow all their terms and conditions as well. By downloading and/or using the data or the code (including downloading, cloning, installing, and any other use of this GitHub repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the MoVi dataset and any associated code and software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Contact

The code in this repository is developed by Saeed Ghorbani.

If you have any questions you can contact us at [email protected].
