Gait3D-Benchmark

This is the code for the paper "Jinkai Zheng, Xinchen Liu, Wu Liu, Lingxiao He, Chenggang Yan, Tao Mei: Gait Recognition in the Wild with Dense 3D Representations and A Benchmark. (CVPR 2022)". The official project page is here.

What's New

  • [Mar 2022] Another in-the-wild gait dataset, GREW, is now supported.
  • [Mar 2022] Our Gait3D dataset and SMPLGait method are released.

Model Zoo

Gait3D

Input size: 128x88 (values in parentheses are for 64x44)

Method Rank@1 Rank@5 mAP mINP download
GaitSet (AAAI 2019) 42.60 (36.70) 63.10 (58.30) 33.69 (30.01) 19.69 (17.30) model-128 (model-64)
GaitPart (CVPR 2020) 29.90 (28.20) 50.60 (47.60) 23.34 (21.58) 13.15 (12.36) model-128 (model-64)
GLN (ECCV 2020) 42.20 (31.40) 64.50 (52.90) 33.14 (24.74) 19.56 (13.58) model-128 (model-64)
GaitGL (ICCV 2021) 23.50 (29.70) 38.50 (48.50) 16.40 (22.29) 9.20 (13.26) model-128 (model-64)
OpenGait Baseline* 47.70 (42.90) 67.20 (63.90) 37.62 (35.19) 22.24 (20.83) model-128 (model-64)
SMPLGait (CVPR 2022) 53.20 (46.30) 71.00 (64.50) 42.43 (37.16) 25.97 (22.23) model-128 (model-64)

*Note that the OpenGait Baseline is equivalent to SMPLGait w/o 3D in our paper.

Cross Domain

Datasets in the Wild (GaitSet, 64x44)

Source Target Rank@1 Rank@5 mAP
GREW (official split) Gait3D 15.80 30.20 11.83
GREW (our split) Gait3D 16.50 31.10 11.71
Gait3D GREW (official split) 18.81 32.25 ~
Gait3D GREW (our split) 43.86 60.89 28.06

Requirements

  • pytorch >= 1.6
  • torchvision
  • pyyaml
  • tensorboard
  • opencv-python
  • tqdm
  • py7zr
  • tabulate
  • termcolor

Installation

You can adjust the second-to-last command (the conda install of PyTorch) to match your CUDA version.

git clone https://github.com/Gait3D/Gait3D-Benchmark.git
cd Gait3D-Benchmark
conda create --name py37torch160 python=3.7
conda activate py37torch160
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch
pip install tqdm pyyaml tensorboard opencv-python py7zr tabulate termcolor
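
After installation, you can run an optional sanity check to confirm that the expected PyTorch build is active. This is only a small sketch, not part of the official scripts:

import torch
import torchvision

# With the conda command above, expect 1.6.0 and 0.7.0.
print(torch.__version__, torchvision.__version__)
# Should print True if the cudatoolkit version matches your NVIDIA driver.
print(torch.cuda.is_available())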

Data Preparation

Please download the Gait3D dataset by signing an agreement. We ask for your information only to make sure the dataset is used for non-commercial purposes. We will not give it to any third party or publish it publicly anywhere.

Data Pretreatment

Run the following commands to preprocess the Gait3D dataset.

python misc/pretreatment.py --input_path 'Gait3D/2D_Silhouettes' --output_path 'Gait3D-sils-64-44-pkl' --img_h 64 --img_w 44
python misc/pretreatment.py --input_path 'Gait3D/2D_Silhouettes' --output_path 'Gait3D-sils-128-88-pkl' --img_h 128 --img_w 88
python misc/pretreatment_smpl.py --input_path 'Gait3D/3D_SMPLs' --output_path 'Gait3D-smpls-pkl'
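
For reference, the silhouette pretreatment essentially packs the per-frame silhouette images of each sequence into a single pickle file at the target resolution. The snippet below is only a minimal sketch of that idea (the per-frame file format and any cropping or normalization steps are assumptions; misc/pretreatment.py is the authoritative implementation):

import os
import pickle

import cv2
import numpy as np

def pack_sequence(seq_dir, out_path, img_h=64, img_w=44):
    # Read every silhouette frame of one sequence, resize it, and
    # stack the frames into a single [T, img_h, img_w] uint8 array.
    frames = []
    for name in sorted(os.listdir(seq_dir)):
        img = cv2.imread(os.path.join(seq_dir, name), cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue
        frames.append(cv2.resize(img, (img_w, img_h)))
    data = np.asarray(frames, dtype=np.uint8)
    with open(out_path, 'wb') as f:
        pickle.dump(data, f)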

Data Structure

After the pretreatment, the data structure under the directory should look like this:

├── Gait3D-sils-64-44-pkl
│  ├── 0000
│     ├── camid0_videoid2
│        ├── seq0
│           └──seq0.pkl
├── Gait3D-sils-128-88-pkl
│  ├── 0000
│     ├── camid0_videoid2
│        ├── seq0
│           └──seq0.pkl
├── Gait3D-smpls-pkl
│  ├── 0000
│     ├── camid0_videoid2
│        ├── seq0
│           └──seq0.pkl
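
Each seq0.pkl stores one whole sequence as a single array, so you can quickly inspect the pretreatment output with a few lines of Python. The paths follow the structure above; the shapes in the comments are expectations based on the pretreatment settings, not guarantees:

import pickle

# Silhouette sequence: expected shape (T, 64, 44) for the 64x44 pretreatment.
with open('Gait3D-sils-64-44-pkl/0000/camid0_videoid2/seq0/seq0.pkl', 'rb') as f:
    sils = pickle.load(f)
print(sils.shape)

# SMPL sequence: expected shape (T, 85), one SMPL parameter vector per frame.
with open('Gait3D-smpls-pkl/0000/camid0_videoid2/seq0/seq0.pkl', 'rb') as f:
    smpls = pickle.load(f)
print(smpls.shape)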

Train

Run the following command:

sh train.sh

Test

Run the following command:

sh test.sh

Citation

Please cite this paper in your publications if it helps your research:

@inproceedings{zheng2022gait3d,
  title={Gait Recognition in the Wild with Dense 3D Representations and A Benchmark},
  author={Jinkai Zheng and Xinchen Liu and Wu Liu and Lingxiao He and Chenggang Yan and Tao Mei},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Acknowledgement

Here are some great resources we benefit from:

  • The codebase is based on OpenGait.
  • The 3D SMPL data is obtained by ROMP.
  • The 2D Silhouette data is obtained by HRNet-segmentation.
  • The 2D pose data is obtained by HRNet.
  • The ReID feature used to make Gait3D is obtained by FastReID.
Comments
  • lib/modeling/models/smplgait.py throwing error when training a new dataset

    Hi Jinkai,

    When I try to apply SMPLGait to another dataset, smplgait.py throws an error during training at smpls = ipts[1][0]  # [n, s, d]: IndexError: list index out of range. It is also interesting that I used 4 GPUs for training; 3 of them could detect the ipts[1][0] tensor with size 1, but the fourth one failed to do so. Could I know how to solve this?

    opened by zhiyuann 7
  • I have a few questions about Gait3D-Benchmark Datasets

    Hi, I'm jjun. I read your paper with great interest.

    We don't currently live in China, so it is difficult to download the dataset from Baidu disk.

    If you don't mind, is there a way to download the dataset from another service (e.g., Google Drive)?

    opened by jjunnii 6
  • Question about 3D SMPL skeleton topology diagram

    Your work promotes the application of gait recognition in real scenes. Could you provide a topology diagram of the SMPL 3D skeleton in Gait3D? The specific meaning of the 24 joint points is not stated in your data description document.

    opened by HL-HYX 4
  • ROMP SMPL transfer

    When I try to use ROMP to generate the 3D meshes, I find there is a version conflict with the ROMP version used by SMPLGait. Could I know which version of ROMP SMPLGait used? That way I could run SMPLGait on other ReID datasets.

    opened by zhiyuann 3
  • question about iteration and epoch

    Hi! The total number of iterations in your code is set to 180000, while you report 1200 total epochs in your paper. What's the relationship between iterations and epochs?

    opened by yan811 2
  • About data generation

    Hi! I'd like to know some details about the data generation in the NPZ files.

    1. What's the order of "pose" in the npz file? The SMPL pose parameter should have dim [24, 3]; how did you convert it to [72,]? Is the order [keypoint1_angle1, keypoint1_angle2, keypoint1_angle3, keypoint2_angle1, keypoint2_angle2, keypoint2_angle3, ...] or [keypoint1_angle1, keypoint2_angle1, ..., keypoint1_angle2, keypoint2_angle2, ..., keypoint1_angle3, keypoint2_angle3, ...]?

    2. How did you generate the pose in SMPL format, SPIN format, and OpenPose format? What's the order of the second dim? Is the keypoint order the same as in the SMPL model?

    3. In the pkl file: for example, the dim of the data in './0000/camid0_videoid2/seq0/seq0.pkl' is [48, 85]. What's the order of dim 1? Is it in time order or shuffled?

    opened by yan811 2
  • GREW pretreatment `to_pickle` has size 0

    I'm trying to run the GREW pretreatment code, but it generates no GREW-pkl folder at the end of the process. I debugged it myself and checked whether the --dataset flag is set properly and the size of the to_pickle list before saving the pickle file. The flag is set correctly, but the size of the list is always 0.

    I downloaded the GREW dataset from the link you sent me and made the GREW-rearranged folder using the code provided. I'll keep investigating what is causing this error, and if I find it I'll open a PR with a fix.

    opened by gosiqueira 1
  • About the pose data

    Can you give a detailed description of the pose data? This is the path of one frame's pose and the corresponding content of the txt file: Gait3D/2D_Poses/0000/camid9_videoid2/seq0/human_crop_f17279.txt '311,438,89.201164,62.87694,0.57074964,89.201164,54.322254,0.47146344,84.92382,62.87694,0.63443935,42.150383....' I have 3 questions. Q1: what does 'f17279' mean? Q2: what does the first number (e.g. 311) in the txt file mean? Q3: which number ('f17279' or '311') should I use as the base when ordering the sequence? Thank you very much!

    opened by HiAleeYang 0