
2s-AGCN

Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition in CVPR19

Note

The original code required PyTorch 0.3 and needed modification for PyTorch 0.4 or higher.
The code has now been updated to support PyTorch >= 0.4.
A new model named AAGCN has been added, which achieves better performance.

Data Preparation

  • Download the raw data from NTU-RGB+D and Skeleton-Kinetics. Then put them under the data directory:

     -data\  
       -kinetics_raw\  
         -kinetics_train\
           ...
         -kinetics_val\
           ...
         -kinetics_train_label.json
          -kinetics_val_label.json
       -nturgbd_raw\  
         -nturgb+d_skeletons\
           ...
         -samples_with_missing_skeletons.txt
    
  • Preprocess the data with

    python data_gen/ntu_gendata.py

     python data_gen/kinetics_gendata.py

  • Generate the bone data with:

    python data_gen/gen_bone_data.py
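
For reference, the bone stream used by the second network is derived from the joint stream: for each predefined pair of connected joints, the bone feature is the vector difference of the two joints' coordinates. The sketch below illustrates that computation under the (N, C, T, V, M) data layout; the pair list is an illustrative placeholder, not the full skeleton-specific lists hard-coded in data_gen/gen_bone_data.py.

    import numpy as np

    # Data layout: (N samples, C=3 coordinates, T frames, V joints, M bodies).
    # Illustrative 1-indexed (joint, neighbor) pairs; the real script hard-codes
    # the complete NTU / Kinetics pair lists.
    example_pairs = [(1, 2), (2, 21), (3, 21)]

    def joints_to_bones(joint_data, pairs):
        """Each bone is the coordinate difference between a joint and its paired neighbor."""
        bones = np.zeros_like(joint_data)
        for v1, v2 in pairs:
            v1, v2 = v1 - 1, v2 - 1  # convert to 0-indexed joint indices
            bones[:, :, :, v1, :] = joint_data[:, :, :, v1, :] - joint_data[:, :, :, v2, :]
        return bones

The generated bone arrays are saved alongside the joint arrays so that the joint and bone streams can be trained independently and ensembled later (see Training & Testing below).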

Training & Testing

Change the config file depending on what you want.

`python main.py --config ./config/nturgbd-cross-view/train_joint.yaml`

`python main.py --config ./config/nturgbd-cross-view/train_bone.yaml`

To ensemble the joint and bone results, first run testing to generate the softmax scores of each stream.

`python main.py --config ./config/nturgbd-cross-view/test_joint.yaml`

`python main.py --config ./config/nturgbd-cross-view/test_bone.yaml`

Then combine the generated scores with:

`python ensemble.py --datasets ntu/xview`
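
For reference, the ensemble step fuses the per-class scores of the two streams. Below is a minimal sketch of that kind of score fusion, assuming each test run has saved a (num_samples, num_classes) score array together with the ground-truth labels; the file names here are hypothetical, not the exact outputs read by ensemble.py.

    import numpy as np

    # Hypothetical file names; adapt them to wherever your test runs saved the scores.
    joint_scores = np.load('joint_scores.npy')  # shape (num_samples, num_classes)
    bone_scores = np.load('bone_scores.npy')    # shape (num_samples, num_classes)
    labels = np.load('labels.npy')              # shape (num_samples,), ground-truth class indices

    # Element-wise sum of the two streams' scores, then pick the highest-scoring class.
    fused_scores = joint_scores + bone_scores
    predictions = fused_scores.argmax(axis=1)
    accuracy = (predictions == labels).mean()
    print('two-stream top-1 accuracy: {:.2%}'.format(accuracy))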

Citation

Please cite the following paper if you use this repository in your research.

@inproceedings{2sagcn2019cvpr,  
      title     = {Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition},  
      author    = {Lei Shi and Yifan Zhang and Jian Cheng and Hanqing Lu},  
      booktitle = {CVPR},  
      year      = {2019},  
}

@article{shi_skeleton-based_2019,
    title = {Skeleton-{Based} {Action} {Recognition} with {Multi}-{Stream} {Adaptive} {Graph} {Convolutional} {Networks}},
    journal = {arXiv:1912.06971 [cs]},
    author = {Shi, Lei and Zhang, Yifan and Cheng, Jian and Lu, Hanqing},
    month = dec,
    year = {2019},
}

Contact

For any questions, feel free to contact: [email protected]

Comments
  • Memory overloading issue

    First of all, thanks a lot for making your code public. I am trying to run experiments on the NTU RGB+D 120 dataset, and I have split the data into training and testing for the cross-subject (CS) protocol as given in the NTU RGB+D 120 paper: 63026 training samples and 54702 testing samples. I am training the model on a GPU cluster, but after running for one epoch the model exceeds the memory limit (screenshot omitted). I tried to clear the cache explicitly using gc.collect, but the memory usage still keeps growing. It would be great if you could help with this.

    opened by Anirudh257 46
  • I got some errors when I was training the net

    First I got the error below (screenshot omitted). After commenting out that parameter, I got another error. I don't know how to solve it; could you give me some advice?

    Traceback (most recent call last): File "/home/sues/Desktop/2s-AGCN-master/main.py", line 550, in processor.start() File "/home/sues/Desktop/2s-AGCN-master/main.py", line 491, in start self.train(epoch, save_model=save_model) File "/home/sues/Desktop/2s-AGCN-master/main.py", line 372, in train loss.backward() File "/home/sues/anaconda3/envs/2sAGCN/lib/python3.5/site-packages/torch/autograd/variable.py", line 167, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables) File "/home/sues/anaconda3/envs/2sAGCN/lib/python3.5/site-packages/torch/autograd/__init__.py", line 99, in backward variables, grad_variables, retain_graph) RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/torch/lib/THC/generic/THCTensorMath.cu:26 /pytorch/torch/lib/THCUNN/ClassNLLCriterion.cu:101: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.

    opened by Dongjiuqing 10
  • Insufficient memory: Unable to allocate 29.0 GiB for an array with shape (7790126400,) and data type float32

    When running python data_gen/gen_bone_data.py, the line File "data_gen/gen_bone_data.py", line 62, in data = np.load('./data/{}/{}_data.npy'.format(dataset, set)) fails with MemoryError: Unable to allocate 29.0 GiB for an array with shape (7790126400,) and data type float32. How should I solve this?

    opened by XieLinMofromsomewhere 7
  • augmentation in feeder

    Hi, I want to know whether the data augmentation in the feeder brings any improvement. Does the length of the input have a big influence? Also, have you trained the model on the NTU RGB+D 120 dataset? How is the accuracy?

    opened by VSunN 7
  • problem with gen_bone_data.py

    Hi, I get an error when running gen_bone_data.py; it looks like a problem with the matrix dimensions. How can I solve it? Thanks. [email protected]:~/2s-AGCN-master/data_gen$ python gen_bone_data.py ntu/xsub train 4%|█▋ | 1/25 [06:49<2:43:40, 409.20s/it] Traceback (most recent call last): File "gen_bone_data.py", line 50, in fp_sp[:, :, :, v1, :] = data[:, :, :, v1, :] - data[:, :, :, v2, :] IndexError: index 20 is out of bounds for axis 3 with size 18 4%|█▋ | 1/25 [06:49<2:43:45, 409.41s/it]

    opened by JaxferZ 5
  • Accuracy of aagcn

    I ran your implementation with J-AAGCN on the NTU-RGBD CV dataset, but the accuracy is 94.64, not the 95.1 reported in your paper. What could cause the difference? The batch size was 32, not 64, because of resource limits. Are there any other things to be aware of? I used your implementation as provided.

    opened by ilikeokoge 4
  • Unexpected key(s) in state_dict when testing with the released model

    Running python main.py --config ./config/nturgbd-cross-view/test_joint.yaml reproduces the results in the paper, but python main.py --config ./config/nturgbd-cross-view/test_bone.yaml raises RuntimeError: Error(s) in loading state_dict for Model:

    Unexpected key(s) in state_dict: "l1.gcn1.conv_res.0.weight", "l1.gcn1.conv_res.0.bias", "l1.gcn1.conv_res.1.weight", "l1.gcn1.conv_res.1.bias", "l1.gcn1.conv_res.1.running_mean", "l1.gcn1.conv_res.1.running_var", "l5.gcn1.conv_res.0.weight", "l5.gcn1.conv_res.0.bias", "l5.gcn1.conv_res.1.weight", "l5.gcn1.conv_res.1.bias", "l5.gcn1.conv_res.1.running_mean", "l5.gcn1.conv_res.1.running_var", "l8.gcn1.conv_res.0.weight", "l8.gcn1.conv_res.0.bias", "l8.gcn1.conv_res.1.weight", "l8.gcn1.conv_res.1.bias", "l8.gcn1.conv_res.1.running_mean", "l8.gcn1.conv_res.1.running_var".

    It looks like the pretrained model does not match the provided code. What should I do to reproduce the results? Looking forward to your reply!

    opened by tailin1009 3
  • dataload error

    Thank you for the source code. When I run it, the following error occurs: ValueError: num_samples should be a positive integer value, but got num_samples=0

    I had already run python data_gen/ntu_gendata.py beforehand, and some files were generated: train_data_joint.npy, train_label.pkl, val_data_joint.npy, val_label.pkl

    but they are all only 1 KB in size.

    How should I deal with this? Please advise.

    Thanks

    opened by xuanshibin 3
  • RuntimeError: running_mean should contain 126 elements not 63 (example).

    What should the number of elements be for 18 joints? When I run your code I get "RuntimeError: running_mean should contain 126 elements not 63"; the 63 appears because I changed the number of nodes. How should I adjust these elements, and how do you get 126 in your experiment?

    opened by JasOlean 3
  • what is (N, C, T, V, M) in agcn.py?

    Thank you for sharing the code and information :) I have some questions about the agcn.py code.

    1. What is (N, C, T, V, M) in agcn.py? I guess T is the 300 frames, V is the similarity between nodes, and M is the number of people in one video, but I am not sure that is right.

    2. Are the bone training code and the joint training code (agcn.py) the same? If not, is the bone training code aagcn.py?

    opened by lodado 2
  • No module named 'data_gen'  and  No such file or directory: '../data/kinetics_raw/kinetics_val'

    When I run "python data_gen/ntu_gendata.py", gets the error : ModuleNotFoundError: No module named 'data_gen'.

    When I run "python data_gen/kinetics_gendata.py", gets the error : FileNotFoundError: [Errno 2] No such file or directory: '../data/kinetics_raw/kinetics_val'.

    My raw data has put in the ./data.

    Needs your help!

    opened by XiongXintyw 2
  • Question about running MS-AAGCN

    Hello! I had the pleasure of reading your paper "Skeleton-Based Action Recognition with Multi-Stream Adaptive Graph Convolutional Networks" and benefited a lot from it. I have already got the 2s-AGCN code running; could you advise how to run the MS-AAGCN code?

    opened by 15762260991 1
  • Definition of the parameter A in the attention module

    While reproducing the code, I cannot find the definition of the parameter A of the graph convolution layer. What does A refer to? class TCN_GCN_unit(nn.Module): def __init__(self, in_channels, out_channels, A, stride=1, residual=True, adaptive=True, attention=True):

    opened by wangxx0101 1
  • Question about tanh vs. softmax in the adaptive graph

    Hello, I have two questions. First, tanh returns a value in [-1, 1], while softmax returns a value in [0, 1]; when modeling the correlation between joints, does a negative tanh value mean the two joints are negatively correlated? Second, why does tanh work slightly better than softmax? I don't quite understand this; could you explain it in detail?

    opened by blue-q 0
  • Where is the code for visualization in Figure 8 and 9?

    Dear Authors,

    I have read your paper "Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition". In it, you show some experimental results in Figures 8 and 9. I would like to know which part of the code produces them, or how to use the layers to visualize these results. I would really appreciate an answer. Thank you.

    opened by JasOlean 3