Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition in CVPR19

Related tags

Deep Learning, 2s-AGCN
Overview

2s-AGCN

Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition in CVPR19

Note

The PyTorch version should be 0.3; for PyTorch 0.4 or higher, the code needs to be modified.
The code has now been updated to support PyTorch >= 0.4.
A new model named AAGCN has been added, which achieves better performance.

Data Preparation

  • Download the raw data from NTU-RGB+D and Skeleton-Kinetics. Then put them under the data directory:

     -data\  
       -kinetics_raw\  
         -kinetics_train\
           ...
         -kinetics_val\
           ...
         -kinetics_train_label.json
          -kinetics_val_label.json
       -nturgbd_raw\  
         -nturgb+d_skeletons\
           ...
         -samples_with_missing_skeletons.txt
    
  • Preprocess the data with

    python data_gen/ntu_gendata.py

    python data_gen/kinetics_gendata.py

  • Generate the bone data with:

    python data_gen/gen_bone_data.py
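
    For context, the bone stream is derived from the joint stream by taking, for every frame, the vector from each joint to a neighbouring (source) joint in the skeleton graph. The sketch below only illustrates the idea; the file paths and the shortened joint-pair list are assumptions, and the authoritative pair list and file names are those in data_gen/gen_bone_data.py.

    ```python
    # Minimal sketch of bone-feature generation, assuming the joint data is stored as an
    # (N, C, T, V, M) array. File names and the joint-pair list below are illustrative;
    # the full NTU pair list is defined in data_gen/gen_bone_data.py.
    import numpy as np

    # A few (joint, neighbour) pairs of the NTU skeleton, 1-indexed as in the dataset docs.
    example_pairs = [(1, 2), (2, 21), (3, 21), (4, 3), (5, 21)]

    joint_data = np.load('./data/ntu/xview/train_data_joint.npy')  # (N, C, T, V, M)
    bone_data = np.zeros_like(joint_data)
    for v1, v2 in example_pairs:
        # Bone vector = joint coordinates minus its neighbour's coordinates.
        bone_data[:, :, :, v1 - 1, :] = (joint_data[:, :, :, v1 - 1, :]
                                         - joint_data[:, :, :, v2 - 1, :])
    np.save('./data/ntu/xview/train_data_bone.npy', bone_data)
    ```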

Training & Testing

Change the config file under ./config depending on the dataset and stream you want to train or test.

`python main.py --config ./config/nturgbd-cross-view/train_joint.yaml`

`python main.py --config ./config/nturgbd-cross-view/train_bone.yaml`

To ensemble the results of the joint and bone streams, first run testing to generate the scores of the softmax layer:

`python main.py --config ./config/nturgbd-cross-view/test_joint.yaml`

`python main.py --config ./config/nturgbd-cross-view/test_bone.yaml`

Then combine the generated scores with:

`python ensemble.py --datasets ntu/xview`
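
For context, the ensemble simply adds the per-class softmax scores saved by the two test runs and takes the argmax. Below is a minimal sketch under assumed file paths and pickle layouts (a {sample_name: scores} mapping per stream and a (sample_names, labels) tuple for the labels); check ensemble.py for the exact names and format it expects.

```python
# Minimal sketch of two-stream score fusion (assumed file paths and pickle layouts;
# see ensemble.py for the authoritative version).
import pickle
import numpy as np

# Assumed paths: the score files written by the joint and bone test runs above.
with open('./work_dir/ntu/xview/test_joint_score.pkl', 'rb') as f:
    joint_scores = dict(pickle.load(f))   # {sample_name: per-class score vector}
with open('./work_dir/ntu/xview/test_bone_score.pkl', 'rb') as f:
    bone_scores = dict(pickle.load(f))

# Assumed label layout: a pickled (sample_names, labels) tuple.
with open('./data/ntu/xview/val_label.pkl', 'rb') as f:
    sample_names, labels = pickle.load(f)

correct = 0
for name, label in zip(sample_names, labels):
    fused = np.array(joint_scores[name]) + np.array(bone_scores[name])  # sum the two streams
    correct += int(np.argmax(fused) == int(label))
print('ensembled top-1 accuracy: {:.4f}'.format(correct / len(labels)))
```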

Citation

Please cite the following papers if you use this repository in your research.

@inproceedings{2sagcn2019cvpr,  
      title     = {Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition},  
      author    = {Lei Shi and Yifan Zhang and Jian Cheng and Hanqing Lu},  
      booktitle = {CVPR},  
      year      = {2019},  
}

@article{shi_skeleton-based_2019,
    title = {Skeleton-{Based} {Action} {Recognition} with {Multi}-{Stream} {Adaptive} {Graph} {Convolutional} {Networks}},
    journal = {arXiv:1912.06971 [cs]},
    author = {Shi, Lei and Zhang, Yifan and Cheng, Jian and Lu, Hanqing},
    month = dec,
    year = {2019},
}

Contact

For any questions, feel free to contact: [email protected]

Comments
  • Memory overloading issue

    First of all, thanks a lot for making your code public. I am trying to run an experiment on the NTU RGB+D 120 dataset, and I split the data into training and testing for the cross-subject (CS) setting as described in the NTU RGB+D 120 paper, giving 63026 training samples and 54702 testing samples. I am training the model on a GPU cluster, but after running for one epoch it exceeds the memory limit. I tried clearing the cache explicitly with gc.collect(), but the memory usage still keeps growing. It would be great if you could help with this.

    opened by Anirudh257 46
  • I got an error when training the network

    First, I got the error below.

    After commenting out that parameter, I got another error.

    I got this error but I don't know how to solve it. Could you give me some advice?

    Traceback (most recent call last): File "/home/sues/Desktop/2s-AGCN-master/main.py", line 550, in processor.start() File "/home/sues/Desktop/2s-AGCN-master/main.py", line 491, in start self.train(epoch, save_model=save_model) File "/home/sues/Desktop/2s-AGCN-master/main.py", line 372, in train loss.backward() File "/home/sues/anaconda3/envs/2sAGCN/lib/python3.5/site-packages/torch/autograd/variable.py", line 167, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables) File "/home/sues/anaconda3/envs/2sAGCN/lib/python3.5/site-packages/torch/autograd/__init__.py", line 99, in backward variables, grad_variables, retain_graph) RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/torch/lib/THC/generic/THCTensorMath.cu:26 /pytorch/torch/lib/THCUNN/ClassNLLCriterion.cu:101: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes`

    opened by Dongjiuqing 10
  • Not enough memory: Unable to allocate 29.0 GiB for an array with shape (7790126400,) and data type float32

    When running python data_gen/gen_bone_data.py, at File "data_gen/gen_bone_data.py", line 62, in data = np.load('./data/{}/{}_data.npy'.format(dataset, set)) I get MemoryError: Unable to allocate 29.0 GiB for an array with shape (7790126400,) and data type float32. How can I solve this?

    opened by XieLinMofromsomewhere 7
  • augmentation in feeder

    Hi, I want to know whether the data augmentation in the feeder brings any improvement. Does the length of the input have a big influence? Also, have you trained the model on the NTU RGB+D 120 dataset, and what accuracy do you get?

    opened by VSunN 7
  • problem with gen_bone_data.py

    Hi, when I run gen_bone_data.py I get an error; it looks like a problem with the matrix dimensions. How can I solve it? Thanks. [email protected]:~/2s-AGCN-master/data_gen$ python gen_bone_data.py ntu/xsub train 4%|█▋ | 1/25 [06:49<2:43:40, 409.20s/it] Traceback (most recent call last): File "gen_bone_data.py", line 50, in fp_sp[:, :, :, v1, :] = data[:, :, :, v1, :] - data[:, :, :, v2, :] IndexError: index 20 is out of bounds for axis 3 with size 18 4%|█▋ | 1/25 [06:49<2:43:45, 409.41s/it]

    opened by JaxferZ 5
  • Accuracy of aagcn

    I ran your implementation using J-AAGCN on the NTU-RGBD CV dataset, but the accuracy is 94.64, not the 95.1 reported in your paper. What could cause the difference? The batch size was 32 instead of 64 because of resource limits. Are there any other things to be aware of? I used your implementation as-is.

    opened by ilikeokoge 4
  • Testing with the released model reports Unexpected key(s) in state_dict:

    Running `python main.py --config ./config/nturgbd-cross-view/test_joint.yaml` reproduces the results from the paper, but `python main.py --config ./config/nturgbd-cross-view/test_bone.yaml` raises RuntimeError: Error(s) in loading state_dict for Model:

    Unexpected key(s) in state_dict: "l1.gcn1.conv_res.0.weight", "l1.gcn1.conv_res.0.bias", "l1.gcn1.conv_res.1.weight", "l1.gcn1.conv_res.1.bias", "l1.gcn1.conv_res.1.running_mean", "l1.gcn1.conv_res.1.running_var", "l5.gcn1.conv_res.0.weight", "l5.gcn1.conv_res.0.bias", "l5.gcn1.conv_res.1.weight", "l5.gcn1.conv_res.1.bias", "l5.gcn1.conv_res.1.running_mean", "l5.gcn1.conv_res.1.running_var", "l8.gcn1.conv_res.0.weight", "l8.gcn1.conv_res.0.bias", "l8.gcn1.conv_res.1.weight", "l8.gcn1.conv_res.1.bias", "l8.gcn1.conv_res.1.running_mean", "l8.gcn1.conv_res.1.running_var".

    It looks like this pretrained model does not match the provided code. What should I do to reproduce the results? Looking forward to your reply!

    opened by tailin1009 3
  • dataload error

    Thanks for your source code, but when I run it the following error occurs: ValueError: num_samples should be a positive integer value, but got num_samples=0

    I had already run 'python data_gen/ntu_gendata.py' before, and some files were generated: train_data_joint.npy, train_label.pkl, val_data_joint.npy, val_label.pkl

    but they are all only 1 KB in size.

    How should I deal with this? Any directions would be appreciated.

    Thanks

    opened by xuanshibin 3
  • RuntimeError: running_mean should contain 126 elements not 63 (example).

    What value do you use for the number of joints (18)? When I run your code, I get "RuntimeError: running_mean should contain 126 elements not 63"; the 63 appears because I changed the number of nodes. How should I adjust these values, and how do you arrive at 126 in your experiment?

    opened by JasOlean 3
  • what is (N, C, T, V, M) in agcn.py?

    Thank you for sharing the code and information :) I have some questions about the agcn.py code.

    1. What is (N, C, T, V, M) in agcn.py? I guess T is the 300 frames, V is the similarity between nodes, and M is the number of people in one video, but I am not sure that is right.

    2. Are the bone training code and the joint training code (agcn.py) the same? If not, is the bone training code aagcn.py?

    opened by lodado 2
  • No module named 'data_gen'  and  No such file or directory: '../data/kinetics_raw/kinetics_val'

    When I run "python data_gen/ntu_gendata.py", gets the error : ModuleNotFoundError: No module named 'data_gen'.

    When I run "python data_gen/kinetics_gendata.py", gets the error : FileNotFoundError: [Errno 2] No such file or directory: '../data/kinetics_raw/kinetics_val'.

    My raw data has put in the ./data.

    Needs your help!

    opened by XiongXintyw 2
  • Question about running MS-AAGCN

    Hello! I was fortunate to read your paper "Skeleton-Based Action Recognition with Multi-Stream Adaptive Graph Convolutional Networks" and benefited a lot from it. I have already got the 2s-AGCN code running; could you please tell me how to run the MS-AAGCN code?

    opened by 15762260991 1
  • Definition of the parameter A in the attention module

    While reproducing the code, I cannot find the definition of the parameter A used in the graph convolution layer. What does this A refer to? class TCN_GCN_unit(nn.Module): def __init__(self, in_channels, out_channels, A, stride=1, residual=True, adaptive=True, attention=True):

    opened by wangxx0101 1
  • Question about the tanh and softmax functions in the adaptive module

    Hello, I have two questions. 1) The tanh activation returns values in [-1, 1], while softmax returns values in [0, 1]. When we model the correlations between joints, does a negative tanh output mean the two joints are negatively correlated? 2) Why does tanh work slightly better than softmax? I do not quite understand this; could you explain it in more detail?

    opened by blue-q 0
  • Where is the code for the visualizations in Figures 8 and 9?

    Dear Authors,

    I have already read your "Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition". In that paper, you show some experimental results in Figures 8 and 9. I would like to know which part of the code produces them, or how to use the layers to visualize these results. I would really appreciate an answer. Thank you.

    opened by JasOlean 3
Releases (v0.0)
Owner
LShi
Video Analysis, Action Recognition.