AIST++ API

This repo contains starter code for using the AIST++ dataset. To download the dataset or explore details of this dataset, please go to our dataset website.

Installation

The code has been tested with Python >= 3.7. You can install the dependencies and this repo by running:

pip install -r requirements.txt
python setup.py install

If you would like to visualize the annotations using this API, you also need to make sure ffmpeg is installed on your machine.
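
For a quick sanity check, a minimal sketch like the one below (purely illustrative, not part of this repo) confirms that ffmpeg is reachable before you try the visualization demos:

import shutil
import sys

# ffmpeg is only needed for the visualization demos below.
if shutil.which("ffmpeg") is None:
    sys.exit("ffmpeg not found on PATH; install it before running the visualization demos.")
print("Using ffmpeg at:", shutil.which("ffmpeg"))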

How to use

We provide demo code for loading and visualizing AIST++ annotations. Note that the AIST++ annotations and videos, as well as the SMPL model (for SMPL visualization only), are required to run the demo code.

The directory structure of the data is expected to be:


<ANNOTATIONS_DIR>
├── motions/
├── keypoints2d/
├── keypoints3d/
├── splits/
├── cameras/
└── ignore_list.txt

<VIDEO_DIR>
└── *.mp4

<SMPL_DIR>
├── SMPL_MALE.pkl
└── SMPL_FEMALE.pkl
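
If you just want to peek at what an annotation file contains before running the demos, a minimal sketch like the following (illustrative only; it assumes each motion .pkl unpickles to a plain dict of numpy arrays) prints the keys and array shapes of one motion annotation:

import glob
import pickle

anno_dir = "<ANNOTATIONS_DIR>"  # same placeholder as in the commands below

# Grab one motion annotation and inspect what it stores.
motion_file = sorted(glob.glob(f"{anno_dir}/motions/*.pkl"))[0]
with open(motion_file, "rb") as f:
    motion = pickle.load(f)

print(motion_file)
for key, value in motion.items():
    print(key, getattr(value, "shape", type(value)))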

Visualize 2D keypoints annotation

The command below will plot 2D keypoints onto the raw video and save it to the directory ./visualization/.

python demos/run_vis.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --video_dir <VIDEO_DIR> \
  --save_dir ./visualization/ \
  --video_name gWA_sFM_c01_d27_mWA2_ch21 \
  --mode 2D

Visualize 3D keypoints annotation

The command below will project 3D keypoints onto the raw video using camera parameters, and save it to the directory ./visualization/.

python demos/run_vis.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --video_dir <VIDEO_DIR> \
  --save_dir ./visualization/ \
  --video_name gWA_sFM_c01_d27_mWA2_ch21 \
  --mode 3D
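
Under the hood, projecting 3D keypoints into a view is a standard pinhole projection with the per-camera parameters. The sketch below shows that math in isolation (illustrative only; it ignores lens distortion and the exact layout of the files in cameras/):

import numpy as np

def project_points(points3d, rvec, tvec, K):
    """Project Nx3 world points to Nx2 pixel coordinates (no lens distortion).

    rvec: axis-angle rotation (3,); tvec: translation (3,); K: 3x3 intrinsics.
    """
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        skew = np.array([[0.0, -k[2], k[1]],
                         [k[2], 0.0, -k[0]],
                         [-k[1], k[0], 0.0]])
        # Rodrigues' formula: axis-angle -> rotation matrix.
        R = np.eye(3) + np.sin(theta) * skew + (1.0 - np.cos(theta)) * (skew @ skew)
    cam = points3d @ R.T + tvec      # world -> camera coordinates
    uv = cam @ K.T                   # camera -> homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]    # perspective divide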

Visualize the SMPL joints annotation

The command below will first compute the SMPL joint locations from our motion annotations (joint rotations and root trajectories), then project and plot them onto the raw video. The result will be saved into the directory ./visualization/.

python demos/run_vis.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --video_dir <VIDEO_DIR> \
  --smpl_dir <SMPL_DIR> \
  --save_dir ./visualization/ \
  --video_name gWA_sFM_c01_d27_mWA2_ch21 \
  --mode SMPL
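
For a rough picture of what this step computes, the sketch below recovers per-frame joint locations from a motion annotation with the third-party smplx package. The pickle keys (smpl_poses, smpl_trans, smpl_scaling), the scaling convention, and the <SEQ_NAME> placeholder (pick any file name from motions/) are assumptions on our side; demos/run_vis.py is the authoritative implementation.

import pickle
import numpy as np
import torch
import smplx  # third-party SMPL implementation; not a dependency of this repo

# Assumed layout of a motion pickle: axis-angle poses (N, 72), root translation (N, 3),
# and a global scaling factor. Check your copy of the annotations.
with open("<ANNOTATIONS_DIR>/motions/<SEQ_NAME>.pkl", "rb") as f:
    motion = pickle.load(f)
poses = torch.from_numpy(motion["smpl_poses"]).float()   # (N, 72)
trans = torch.from_numpy(motion["smpl_trans"]).float()   # (N, 3)
scale = float(np.asarray(motion["smpl_scaling"]).reshape(-1)[0])

model = smplx.SMPL(model_path="<SMPL_DIR>", gender="male", batch_size=poses.shape[0])
with torch.no_grad():
    output = model(global_orient=poses[:, :3],
                   body_pose=poses[:, 3:],
                   transl=trans / scale)
joints = output.joints.numpy() * scale                    # (N, J, 3) joint locations
print(joints.shape)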

Multi-view 3D keypoints and motion reconstruction

This repo also provides code we used for constructing this dataset from the multi-view AIST Dance Video Database. The construction pipeline starts with frame-by-frame 2D keypoint detection and manual camera estimation. Then triangulation and bundle adjustment are applied to optimize the camera parameters as well as the 3D keypoints. Finally we sequentially fit the SMPL model to 3D keypoints to get a motion sequence represented using joint angles and a root trajectory. The following figure shows our pipeline overview.

AIST++ construction pipeline overview.

The annotations in AIST++ are in COCO format for the 2D & 3D keypoints and in SMPL format for the human motion, and are designed to serve general research purposes. However, in some cases you might need the data in a different format (e.g., the OpenPose / AlphaPose keypoint format, or the STAR human motion format). With the code we provide, it should be easy to construct your own version of AIST++ with your own keypoint detector or human model definition.

Step 1. Assume you have your own 2D keypoint detection results stored in <KEYPOINTS_DIR>. You can start by preprocessing the keypoints into the .pkl format that we support. The code we used at this step is as follows, but you might need to modify the script run_preprocessing.py to make it compatible with your own data.

python processing/run_preprocessing.py \
  --keypoints_dir <KEYPOINTS_DIR> \
  --save_dir <ANNOTATIONS_DIR>/keypoints2d/

Step 2. Then you can estimate the camera parameters using your 2D keypoints. This step is optional, as you can still use our camera parameter estimates, which are quite accurate. At this step, you will need the <ANNOTATIONS_DIR>/cameras/mapping.txt file, which stores the mapping from videos to the different environment settings.

# If you would like to estimate your own camera parameters:
python processing/run_estimate_camera.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --save_dir <ANNOTATIONS_DIR>/cameras/
# Or you can skip this step by just using our camera parameter estimates.

Step 3. The next step is to perform 3D keypoint reconstruction from the multi-view 2D keypoints and camera parameters. You can just run:

python processing/run_estimate_keypoints.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --save_dir <ANNOTATIONS_DIR>/keypoints3d/
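
To make the triangulation part of this step concrete, here is a minimal direct-linear-transform (DLT) sketch for a single joint observed by several calibrated cameras. It is illustrative only; run_estimate_keypoints.py additionally has to deal with missing detections, outliers, and bundle adjustment.

import numpy as np

def triangulate_point(projection_matrices, points2d):
    """DLT triangulation of one 3D point.

    projection_matrices: list of 3x4 camera matrices P = K [R | t].
    points2d: list of (u, v) observations, one per camera.
    Returns the 3D point that minimizes the algebraic error.
    """
    rows = []
    for P, (u, v) in zip(projection_matrices, points2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # de-homogenize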

Step 4. Finally, we can estimate SMPL-format human motion data by fitting the 3D keypoints to the SMPL model. If you would like to use another human model, such as STAR, you will need to modify the script run_estimate_smpl.py accordingly. The following command runs the SMPL fitting.

python processing/run_estimate_smpl.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --smpl_dir <SMPL_DIR> \
  --save_dir <ANNOTATIONS_DIR>/motions/
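
Conceptually, the fitting is an optimization of SMPL pose and translation parameters against the reconstructed 3D joints. The sketch below illustrates that idea with smplx and a naive Adam loop; the joint correspondences, initialization, priors, and scaling handled by run_estimate_smpl.py are more involved, and all names here are illustrative.

import torch
import smplx  # third-party SMPL implementation, used here only for illustration

def fit_smpl_to_joints(target_joints, smpl_dir, num_iters=300):
    """Fit SMPL pose/translation to target joints of shape (N, J, 3).

    Assumes the targets are already ordered like the model's output joints,
    which is NOT the case for raw COCO keypoints -- a joint mapping is needed.
    """
    n_frames = target_joints.shape[0]
    model = smplx.SMPL(model_path=smpl_dir, gender="male", batch_size=n_frames)

    global_orient = torch.zeros(n_frames, 3, requires_grad=True)
    body_pose = torch.zeros(n_frames, 69, requires_grad=True)
    transl = torch.zeros(n_frames, 3, requires_grad=True)
    target = torch.as_tensor(target_joints, dtype=torch.float32)

    optimizer = torch.optim.Adam([global_orient, body_pose, transl], lr=0.01)
    for _ in range(num_iters):
        optimizer.zero_grad()
        output = model(global_orient=global_orient, body_pose=body_pose, transl=transl)
        # Simple per-joint squared error against the triangulated keypoints.
        loss = ((output.joints[:, :target.shape[1]] - target) ** 2).mean()
        loss.backward()
        optimizer.step()
    return global_orient.detach(), body_pose.detach(), transl.detach()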

Note that this step will take several days to process the entire dataset if your machine has only one GPU. In practice, we ran this step on a cluster, but we are only able to provide the single-threaded version.

MISC.

  • COCO-format keypoint definition:
[
"nose", 
"left_eye", "right_eye", "left_ear", "right_ear", "left_shoulder","right_shoulder", 
"left_elbow", "right_elbow", "left_wrist", "right_wrist", "left_hip", "right_hip", 
"left_knee", "right_knee", "left_ankle", "right_ankle"
]
  • SMPL-format body joint definition:
[
"root", 
"left_hip", "left_knee", "left_foot", "left_toe", 
"right_hip", "right_knee", "right_foot", "right_toe",
"waist", "spine", "chest", "neck", "head", 
"left_in_shoulder", "left_shoulder", "left_elbow", "left_wrist",
"right_in_shoulder", "right_shoulder", "right_elbow", "right_wrist"
]
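
When converting between the two conventions above (e.g., to use your own keypoint detector or human model), only a subset of joints have a direct name-for-name counterpart. A small mapping derived from the two lists above:

# COCO keypoint name -> nearest SMPL joint name, based on the two definitions above.
# Face keypoints (nose, eyes, ears) and SMPL-only joints (root, spine, toes, ...)
# have no direct counterpart and are omitted.
COCO_TO_SMPL = {
    "left_shoulder": "left_shoulder",
    "right_shoulder": "right_shoulder",
    "left_elbow": "left_elbow",
    "right_elbow": "right_elbow",
    "left_wrist": "left_wrist",
    "right_wrist": "right_wrist",
    "left_hip": "left_hip",
    "right_hip": "right_hip",
    "left_knee": "left_knee",
    "right_knee": "right_knee",
    "left_ankle": "left_foot",    # SMPL lists "foot" rather than "ankle"
    "right_ankle": "right_foot",
}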