OpenGait is a flexible and extensible gait recognition project

Overview

Note: This code is for academic purposes only; it must not be used for anything that could be considered commercial use.

OpenGait

OpenGait is a flexible and extensible gait recognition project provided by the Shiqi Yu Group and supported in part by WATRIX.AI. Only the pre-beta version has been released so far; more documentation and reproduced methods will be provided as soon as possible.

Highlighted features:

  • Multiple Models Support: We reproduced several SOTA methods and reached the same or even better performance.
  • DDP Support: The officially recommended Distributed Data Parallel (DDP) mode is used during both the training and testing phases.
  • AMP Support: The Automatic Mixed Precision (AMP) option is available.
  • Nice logs: We use tensorboard and Python's logging module to record everything in a clean, readable format.

Model Zoo

| Model | NM | BG | CL | Configuration | Input Size | Inference Time | Model Size |
|-------|----|----|----|---------------|------------|----------------|------------|
| Baseline | 96.3 | 92.2 | 77.6 | baseline.yaml | 64x44 | 12s | 3.78M |
| GaitSet (AAAI 2019) | 95.8 (95.0) | 90.0 (87.2) | 75.4 (70.4) | gaitset.yaml | 64x44 | 11s | 2.59M |
| GaitPart (CVPR 2020) | 96.1 (96.2) | 90.7 (91.5) | 78.7 (78.7) | gaitpart.yaml | 64x44 | 22s | 1.20M |
| GLN* (ECCV 2020) | 96.1 (95.6) | 92.5 (92.0) | 80.4 (77.2) | gln_phase1.yaml, gln_phase2.yaml | 128x88 | 14s | 9.46M / 15.6214M |
| GaitGL (ICCV 2021) | 97.5 (97.4) | 95.1 (94.5) | 83.5 (83.6) | gaitgl.yaml | 64x44 | 31s | 3.10M |

The results in parentheses are those reported in the original papers.

Note:

  • All the models were tested on CASIA-B (Rank@1 accuracy, excluding identical-view cases).
  • The reported GLN result was obtained without the compact block.
  • Only 2 RTX 6000 GPUs were used during the inference phase.
  • The results on OUMVLP will be released soon. Its inference takes only about 90 seconds (Baseline, 8 RTX 6000 GPUs).

Get Started

Installation

  1. Clone this repo:

    git clone https://github.com/ShiqiYu/OpenGait.git
    
  2. Install dependencies:

    • pytorch >= 1.6
    • torchvision
    • pyyaml
    • tensorboard
    • opencv-python
    • tqdm

    Install dependencies with Anaconda:

    conda install tqdm pyyaml tensorboard opencv
    conda install pytorch==1.6.0 torchvision -c pytorch
    

    Or, install dependencies with pip:

    pip install tqdm pyyaml tensorboard opencv-python
    pip install torch==1.6.0 torchvision==0.7.0
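
    Optionally, you can verify the environment afterwards. This quick check is not part of the official instructions; it only assumes a standard PyTorch install:

    # Optional sanity check: print the PyTorch/torchvision versions and whether CUDA is visible.
    python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"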
    

Prepare dataset

See prepare dataset.

Train

Train a model by

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 lib/main.py --cfgs ./config/baseline.yaml --phase train
  • python -m torch.distributed.launch: our implementation uses DistributedDataParallel, so training must be launched through this module.
  • --nproc_per_node: the number of GPUs to use; it must equal the number of devices listed in CUDA_VISIBLE_DEVICES.
  • --cfgs: the path to the config file.
  • --phase: specified as train.
  • --iter: specify an iteration number, or use restore_hint in the configuration file, to resume training from that checkpoint (see the example below).
  • --log_to_file: if specified, the log is also written to disk.

You can run commands in train.sh for training different models.
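
For example, the options above can be combined to resume training while mirroring the log to disk. This is only an illustrative sketch built from the flags documented in this section; the iteration number 60000 is a placeholder for whatever checkpoint you have saved:

# Resume baseline training from iteration 60000 and also write the log to disk (illustrative values).
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 lib/main.py --cfgs ./config/baseline.yaml --phase train --iter 60000 --log_to_file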

Test

Use trained model to evaluate by

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 lib/main.py --cfgs ./config/baseline.yaml --phase test
  • --phase: specified as test.
  • --iter: specify an iteration number, or use restore_hint in the configuration file, to restore the model from that checkpoint.

Tip: The other arguments are the same as in the train phase.

You can run commands in test.sh for testing different models.
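
If only one GPU is available, the same rule applies: list a single device and keep --nproc_per_node equal to the number of visible devices. A minimal single-GPU sketch, again assuming the baseline config:

# Single-GPU evaluation; --nproc_per_node must match the number of devices in CUDA_VISIBLE_DEVICES.
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 lib/main.py --cfgs ./config/baseline.yaml --phase test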

Customize

If you want to customize your own model, see here.

Warning

  • Some models may not be compatible with AMP; you can disable it by setting enable_float16 to False.
  • In DDP mode, zombie processes may occur when the program terminates abnormally. You can clear them with kill $(ps aux | grep main.py | grep -v grep | awk '{print $2}').
  • We implemented testing while training, but it slightly affected the results. None of our published models use this functionality. You can disable it by setting with_test to False; the sketch below shows one way to locate these options in the configs.
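
To see where these switches live before changing them, you can search the config files. This is just a small sketch, assuming the config paths used in the examples above:

# Locate the AMP and test-while-training options mentioned in the warnings above.
grep -nE "enable_float16|with_test" ./config/baseline.yaml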

Authors:

Open Gait Team (OGT)

Acknowledgement

Comments
  • Implementation details of the GLN network

    Detailed description

    The paper uses 1×1 convolutions in the lateral connections, but the code implements a 3×3 kernel. Does this implementation give better performance? Just asking out of curiosity.

    Then at each stage, a 1 × 1 convolutional layer is taken to rearrange the features and adjust the channel dimension

    lateral_layer code:

    https://github.com/ShiqiYu/OpenGait/blob/6a469fcd8d0fbe23262ec3d751cf74ec341f17b6/lib/modeling/models/gln.py#L54-L59

    opened by luwanglin 12
  • Unable to find a valid cuDNN algorithm to run convolution

    I tested this on a laptop with only one GPU, so I used: CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 lib/main.py --cfgs ./config/gaitset.yaml --phase train. It reports the following error:

    Traceback (most recent call last):
      File "lib/main.py", line 66, in
        run_model(cfgs, training)
      File "lib/main.py", line 49, in run_model
        Model.run_train(model)
      File "/home/yq/OpenGait/lib/modeling/base_model.py", line 371, in run_train
        model.train_step(loss_sum)
      File "/home/yq/OpenGait/lib/modeling/base_model.py", line 307, in train_step
        self.Scaler.scale(loss_sum).backward()
      File "/home/yq/anaconda3/envs/pt14/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/home/yq/anaconda3/envs/pt14/lib/python3.7/site-packages/torch/autograd/init.py", line 127, in backward
        allow_unreachable=True) # allow_unreachable flag
    RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
    Exception raised from try_all at /pytorch/aten/src/ATen/native/cudnn/Conv.cpp:692 (most recent call first)

    What is going wrong here?

    documentation 
    opened by scyzero 10
  • Some confusion about GaitEdge forward

    During inference with GaitEdge, why does it still need silhouettes as input? Can't it get the silhouettes from the output of the segmentation U-Net? Also, I don't understand why it needs ratios as input. Isn't the end-to-end network supposed to take just an RGB image as input?

    def forward(self, inputs):
            ipts, labs, _, _, seqL = inputs
    
            ratios = ipts[0]
            rgbs = ipts[1]
            sils = ipts[2]
    
            n, s, c, h, w = rgbs.size()
            rgbs = rgbs.view(n*s, c, h, w)
            sils = sils.view(n*s, 1, h, w)
            logis = self.Backbone(rgbs)  # [n, s, c, h, w]
            logits = torch.sigmoid(logis)
            mask = torch.round(logits).float()
            if self.is_edge:
                edge_mask, eroded_mask = self.preprocess(sils)
    
                # Gait Synthesis
                new_logits = edge_mask*logits+eroded_mask*sils
    
                if self.align:
                    cropped_logits = self.gait_align(
                        new_logits, sils, ratios)
                else:
                    cropped_logits = self.resize(new_logits)
            else:
                if self.align:
                    cropped_logits = self.gait_align(
                        logits, mask, ratios)
                else:
                    cropped_logits = self.resize(logits)
            _, c, H, W = cropped_logits.size()
            cropped_logits = cropped_logits.view(n, s, H, W)
            retval = super(GaitEdge, self).forward(
                [[cropped_logits], labs, None, None, seqL])
            retval['training_feat']['bce'] = {'logits': logits, 'labels': sils}
            retval['visual_summary']['image/roi'] = cropped_logits.view(
                n*s, 1, H, W)
    
            return retval
    
    opened by enemy1205 9
  • Can the code run in Windows 11 anaconda environment?

    System information (version)

    • Pytorch = 1.6.0
    • Operating System / Platform = Windows 11
    • Cuda = 10.1.243

    Detailed description

    The problem appears when I run the code on my computer with the above system information. I created a new conda env and cloned this code into it, but I cannot run the code; the errors are below:

    (opengait) C:\Users\Lenovo\OpenGait>set CUDA_VISIBLE_DEVICES=1 & python -m torch.distributed.launch --nproc_per_node=2 lib/main.py --cfgs ./config/baseline.yaml --phase train


    Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.


    Traceback (most recent call last):
    Traceback (most recent call last):
      File "lib/main.py", line 58, in
      File "lib/main.py", line 58, in
        torch.distributed.init_process_group('nccl', init_method='env://')
        torch.distributed.init_process_group('nccl', init_method='env://')
    AttributeError: module 'torch.distributed' has no attribute 'init_process_group'
    AttributeError: module 'torch.distributed' has no attribute 'init_process_group'
    Traceback (most recent call last):
      File "C:\Users\Lenovo\anaconda3\envs\opengait\lib\runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "C:\Users\Lenovo\anaconda3\envs\opengait\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "C:\Users\Lenovo\anaconda3\envs\opengait\lib\site-packages\torch\distributed\launch.py", line 261, in
        main()
      File "C:\Users\Lenovo\anaconda3\envs\opengait\lib\site-packages\torch\distributed\launch.py", line 256, in main
        raise subprocess.CalledProcessError(returncode=process.returncode,
    subprocess.CalledProcessError: Command '['C:\Users\Lenovo\anaconda3\envs\opengait\python.exe', '-u', 'lib/main.py', '--local_rank=1', '--cfgs', './config/baseline.yaml', '--phase', 'train']' returned non-zero exit status 1.

    opened by robot-droid 9
  • Error when testing with the baseline model

    System information (version)

    Detailed description

    [2022-02-17 12:42:57] [INFO]: {'dataset_name': 'CASIA-B', 'dataset_root': './dataset_output', 'num_workers': 1, 'dataset_partition': './misc/partitions/CASIA-B_include_005.json', 'remove_no_gallery': False, 'cache': False, 'test_dataset_name': 'CASIA-B'}
    [2022-02-17 12:42:57] [INFO]: -------- Test Pid List --------
    [2022-02-17 12:42:57] [INFO]: ['075']
    [2022-02-17 12:43:00] [INFO]: Restore Parameters from output/CASIA-B/Baseline/Baseline/checkpoints/Baseline-60000.pt !!!
    [2022-02-17 12:43:00] [INFO]: Parameters Count: 3.77914M
    [2022-02-17 12:43:00] [INFO]: Model Initialization Finished!
    Transforming: 100%|██████████| 110/110 [00:01<00:00, 76.67it/s]
    Traceback (most recent call last):
      File "lib/main.py", line 70, in
        run_model(cfgs, training)
      File "lib/main.py", line 55, in run_model
        Model.run_test(model)
      File "/home/lalaland/PycharmProjects/OpenGait_/lib/modeling/base_model.py", line 470, in run_test
        return eval_func(info_dict, dataset_name, **valid_args)
      File "/home/lalaland/PycharmProjects/OpenGait_/lib/utils/evaluation.py", line 79, in identification
        0) * 100 / dist.shape[0], 2)
    ValueError: could not broadcast input array from shape (4,) into shape (5,)
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 10590) of binary: /home/lalaland/anaconda3/envs/torch/bin/python
    Traceback (most recent call last):
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "main", mod_spec)
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in
        main()
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
        launch(args)
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
        run(args)
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
        )(*cmd_args)
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in call
        return launch_agent(self._config, self._entrypoint, list(args))
      File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
        failures=result.failures,
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

    lib/main.py FAILED

    Failures: <NO_OTHER_FAILURES>

    Root Cause (first observed failure):
    [0]:
      time       : 2022-02-17_12:43:06
      host       : lalaland-Predator-PH317-55
      rank       : 0 (local_rank: 0)
      exitcode   : 1 (pid: 10590)
      error_file : <N/A>
      traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

    question 
    opened by SerJamie 8
  • What does this function do?

    https://github.com/ShiqiYu/OpenGait/blob/49cbc44069878210e11117697be507921db1e3fa/lib/modeling/modules.py#L34 At the end of this function, isn't return x.view(*_) equivalent to return x.view(n, s, c, h, w)? If so, doesn't the function change anything at all? What is it actually for? I don't understand it and would appreciate some guidance.

    question 
    opened by HiAleeYang 8
  • training GaitGL on OUMVLP with default config too slow to tolerate.....

    I tried GaitGL on the OUMVLP dataset, with 6000 IDs for training and the rest for testing, using the default config on 8 Tesla V100s; it costs 165 seconds per 100 iterations. If I want 10 epochs, time = 10 (epochs) x 130000 (seqs) / 8 (batch size) / 100 * 165 = 3.1 days.

    I guess computing the triplet loss might take a lot of time, so I'm trying to modify the training procedure so that the triplet loss only takes effect in later epochs, while early epochs only compute the ID loss, namely the softmax loss.

    Are there any other ways to speed this up?

    Thanks

    opened by silverlining21 8
  • GREW pretreatment `to_pickle` has size 0

    System information (version)

    • Pytorch => 1.11
    • Operating System / Platform => Ubuntu 20.04
    • Cuda => 11.3

    Detailed description

    I'm trying to run the GREW pretreatment code, but it generates no GREW-pkl folder at the end of the process. I debugged it myself and checked whether the --dataset flag is set properly and the size of the to_pickle list before saving the pickle file. The flag is set correctly, but the size of the list is always 0.

    I downloaded the GREW dataset from the link you sent me and made the GREW-rearranged folder using the code provided. I'll keep investigating what is causing this error, and if I find it I'll submit a PR with a fix.

    Issue submission checklist

    • [x] I checked the problem with documentation, FAQ, issues, and have not found solution.
    no-issue-activity 
    opened by gosiqueira 7
  • What does 'SeqL' mean?

    Hi, https://github.com/ShiqiYu/OpenGait/blob/49cbc44069878210e11117697be507921db1e3fa/lib/modeling/modules.py#L61 I guess it means the length of the sequence. What does seqL[0] mean? By the way, what is the shape of seqL? What does each dimension of seqL correspond to?

    opened by aleeyang 7
  • How to test with Baseline-60000.pt

    My command produces no response: CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 lib/main.py --cfgs ./config/baseline.yaml --phase train. How should baseline.yaml be configured?

    question 
    opened by w229850 7
  • How to fix this problem when I want to test

    System information (version)

    • Pytorch => :grey_question:
    • Operating System / Platform => :grey_question:
    • Cuda => :grey_question:

    Detailed description

    image

    Steps to reproduce

    Issue submission checklist

    • [ ] I checked the problem with documentation, FAQ, issues, and have not found solution.
    opened by Flyooofly 6
  • Reconstruct LossAggregator and fix some typos in config files

    1. The new config files brought by Gait3D have some typos.
    2. The old LossAggregator is a plain Python class that cannot register parameters to the base_model, which makes it inconvenient to train models with losses that require parameters, such as center loss.
      Therefore, it seems better to register LossAggregator as an nn.Module and use nn.ModuleDict to organize all losses. In this new version, LossAggregator can still be indexed like a regular Python dictionary (kept the same as the current version), but the modules it contains are properly registered and visible to all Module methods. All parameters registered in the losses can be accessed via self.parameters(), so they can be trained properly.
    opened by wj1tr0y 0
  • ValueError: The input size of each GPU must be 1 in testing mode

    After training GaitGL for 80000 epochs, I test GaitGL by running "CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 12345 --nproc_per_node=1 opengait/main.py --cfgs ./config/gaitgl/gaitgl.yaml --phase test", but get "ValueError: The input size of each GPU must be 1 in testing mode, but got 4!" image

    opened by zhang123-sys 5
  • What is the amount of computation and parameters of the GaitEdge?

    System information (version)

    • Pytorch => :grey_question:
    • Operating System / Platform => :grey_question:
    • Cuda => :grey_question:

    Detailed description

    Steps to reproduce

    Issue submission checklist

    • [ ] I checked the problem with documentation, FAQ, issues, and have not found solution.
    opened by puyiwen 0
Releases(v1.1)
Owner
Shiqi Yu
Associate Professor, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China.