
RSPNet

Official Pytorch implementation for AAAI2021 paper "RSPNet: Relative Speed Perception for Unsupervised Video Representation Learning"

[Supplementary Materials]

Getting Started

Install Dependencies

All dependencies can be installed using pip:

python -m pip install -r requirements.txt

Our experiments run on Python 3.7 and PyTorch 1.6. Other versions should work but have not been tested.
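A quick way to confirm the environment (an illustrative check, not part of the repository):

import sys
import torch

print(sys.version.split()[0])     # expect 3.7.x
print(torch.__version__)          # expect 1.6.x
print(torch.cuda.is_available())  # training assumes CUDA GPUs are available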

Transcode Videos (Optional)

This step is optional but will increase the data loading speed dramatically.

We decode videos on the fly during training, so there is no need to split them into frames. This makes disk I/O much faster but increases CPU usage. The transcoding step reduces the CPU cost of decoding by 1) lowering the video resolution and 2) adding more key frames.

To transcode the videos, you need to have ffmpeg installed; then run:

python utils/transcode_dataset.py PATH/TO/ORIGIN_VIDEOS PATH/TO/TRANSCODED_VIDEOS

Be warned: this will use all of your CPU cores and takes several hours to complete (on our dual Intel E5-2630 workstation).
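For intuition, the per-video transcoding roughly corresponds to the sketch below; the resolution, key-frame interval, and codec here are assumptions, not necessarily the exact parameters used by utils/transcode_dataset.py.

import subprocess

def transcode(src: str, dst: str) -> None:
    # 1) lower the resolution (short side capped at 256 px, assumed value)
    # 2) add more key frames (one every 16 frames, assumed value)
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-vf", "scale=-2:256",
        "-g", "16",
        "-c:v", "libx264",
        dst,
    ], check=True)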

Prepare Datasets

You are expected to prepare data for pre-training (the Kinetics-400 dataset) and fine-tuning (the UCF101, HMDB51, and Something-Something-V2 datasets). To let the scripts find the datasets on your system, the recommended way is to create symbolic links in the ./data directory pointing to the actual locations. We found this solution flexible.
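For example, a few links could be created as in this sketch (the source paths are placeholders for wherever the datasets live on your machine):

import os

os.makedirs("data", exist_ok=True)
os.symlink("/mnt/datasets/UCF101", "data/UCF101")            # placeholder source path
os.symlink("/mnt/datasets/kinetics400", "data/kinetics400")  # placeholder source path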

The expected directory hierarchy is as follows:

├── data
│   ├── hmdb51
│   │   ├── metafile
│   │   │   ├── brush_hair_test_split1.txt
│   │   │   └── ...
│   │   └── videos
│   │       ├── brush_hair
│   │       │   └── *.avi
│   │       └── ...
│   ├── UCF101
│   │   ├── ucfTrainTestlist
│   │   │   ├── classInd.txt
│   │   │   ├── testlist01.txt
│   │   │   ├── trainlist01.txt
│   │   │   └── ...
│   │   └── UCF-101
│   │       ├── ApplyEyeMakeup
│   │       │   └── *.avi
│   │       └── ...
│   ├── kinetics400
│   │   ├── train_video
│   │   │   ├── answering_questions
│   │   │   │   └── *.mp4
│   │   │   └── ...
│   │   └── val_video
│   │       └── (same as train_video)
│   ├── kinetics100
│   │   └── (same as kinetics400)
│   └── smth-smth-v2
│       ├── 20bn-something-something-v2
│       │   └── *.mp4
│       └── annotations
│           ├── something-something-v2-labels.json
│           ├── something-something-v2-test.json
│           ├── something-something-v2-train.json
│           └── something-something-v2-validation.json
└── ...

Alternatively, you can change the paths in config/dataset to match your system.

Build Kinetics-100 dataset (Optional)

Some of our ablation study experiments use the Kinetics-100 dataset for pre-training. This dataset is built by extracting the 100 classes of Kinetics-400 whose training videos have the smallest total file size.

If you have Kinetics-400 available, you can build Kinetics-100 by:

python -m utils.build_kinetics_subset

This script creates symbolic links instead of copying data. It is expected to complete within a minute.
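For intuition, selecting the subset amounts to ranking classes by the total size of their training videos and keeping the 100 smallest; a minimal sketch (illustrative only; the real logic lives in utils/build_kinetics_subset):

from pathlib import Path

train_root = Path("data/kinetics400/train_video")
sizes = {
    cls.name: sum(f.stat().st_size for f in cls.glob("*.mp4"))
    for cls in train_root.iterdir()
    if cls.is_dir()
}
# Keep the 100 classes with the smallest total training-set size
kinetics100_classes = sorted(sizes, key=sizes.get)[:100]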

We have included a pre-built subset at data/kinetics100_links and created the symbolic link data/kinetics100 pointing to it. You still need to have data/kinetics400 available at runtime.

Pre-training on Pretext Tasks

Now that the environment is set up, run the following commands to pre-train your models on the pretext tasks.

export CUDA_VISIBLE_DEVICES=0,1,2,3
# Architecture: C3D
python pretrain.py -e exps/pretext-c3d -c config/pretrain/c3d.jsonnet
# Architecture: ResNet-18
python pretrain.py -e exps/pretext-resnet18 -c config/pretrain/resnet18.jsonnet
# Architecture: S3D-G
python pretrain.py -e exps/pretext-s3dg -c config/pretrain/s3dg.jsonnet
# Architecture: R(2+1)D
python pretrain.py -e exps/pretext-r2plus1d -c config/pretrain/r2plus1d.jsonnet

You can use the Kinetics-100 dataset for pre-training by editing config/pretrain/moco-train-base.jsonnet (line 13).

Action Recognition

After pre-training on the pretext tasks, the models are fine-tuned for action recognition on the UCF101, HMDB51, and Something-Something-V2 datasets.

export CUDA_VISIBLE_DEVICES=0,1
# Dataset: UCF101
#     Architecture: C3D Acc@1=76.71%
python finetune.py -c config/finetune/ucf101_c3d.jsonnet \
                   --mc exps/pretext-c3d/model_best.pth.tar \
                   -e exps/ucf101-c3d
#     Architecture: ResNet-18 Acc@1=74.33%
python finetune.py -c config/finetune/ucf101_resnet18.jsonnet \
                   --mc exps/pretext-resnet18/model_best.pth.tar \
                   -e exps/ucf101-resnet18
#     Architecture: S3D-G Acc@1=89.9%
python finetune.py -c config/finetune/ucf101_s3dg.jsonnet \
                   --mc exps/pretext-s3dg/model_best.pth.tar \
                   -e exps/ucf101-s3dg
#     Architecture: R(2+1)D Acc@1=81.1%
python finetune.py -c config/finetune/ucf101_r2plus1d.jsonnet \
                   --mc exps/pretext-r2plus1d/model_best.pth.tar \
                   -e exps/ucf101-r2plus1d

# Dataset: HMDB51
#     Architecture: C3D Acc@1=44.58%
python finetune.py -c config/finetune/hmdb51_c3d.jsonnet \
                   --mc exps/pretext-c3d/model_best.pth.tar \
                   -e exps/hmdb51-c3d
#     Architecture: ResNet-18 Acc@1=41.83%
python finetune.py -c config/finetune/hmdb51_resnet18.jsonnet \
                   --mc exps/pretext-resnet18/model_best.pth.tar \
                   -e exps/hmdb51-resnet18
#     Architecture: S3D-G Acc@1=59.6%
python finetune.py -c config/finetune/hmdb51_s3dg.jsonnet \
                   --mc exps/pretext-s3dg/model_best.pth.tar \
                   -e exps/hmdb51-s3dg
#     Architecture: R(2+1)D Acc@1=44.6%
python finetune.py -c config/finetune/hmdb51_r2plus1d.jsonnet \
                   --mc exps/pretext-r2plus1d/model_best.pth.tar \
                   -e exps/hmdb51-r2plus1d

# Dataset: Something-something-v2
#     Architecture: C3D Acc@1=47.76%
python finetune.py -c config/finetune/smth_smth_c3d.jsonnet \
                   --mc exps/pretext-c3d/model_best.pth.tar \
                   -e exps/smthv2-c3d
#     Architecture: ResNet-18 Acc@1=44.02%
python finetune.py -c config/finetune/smth_smth_resnet18.jsonnet \
                   --mc exps/pretext-resnet18/model_best.pth.tar \
                   -e exps/smthv2-resnet18
#     Architecture: S3D-G Acc@1=55.03%
python finetune.py -c config/finetune/smth_smth_s3dg.jsonnet \
                   --mc exps/pretext-s3dg/model_best.pth.tar \
                   -e exps/smthv2-s3dg

Results and Pre-trained Models

| Architecture | Pre-trained dataset | Pre-training epochs | Pre-trained model | Acc. on UCF101 | Acc. on HMDB51 |
|--------------|---------------------|---------------------|-------------------|----------------|----------------|
| S3D-G        | Kinetics-400        | 1000                | Download link     | 93.7           | 64.7           |
| S3D-G        | Kinetics-400        | 200                 | Download link     | 89.9           | 59.6           |
| R(2+1)D      | Kinetics-400        | 200                 | Download link     | 81.1           | 44.6           |
| ResNet-18    | Kinetics-400        | 200                 | Download link     | 74.3           | 41.8           |
| C3D          | Kinetics-400        | 200                 | Download link     | 76.7           | 44.6           |

Video Retrieval

The pre-trained model can also be used to search for videos relevant to a given query video.

export CUDA_VISIBLE_DEVICES=0 # use single GPU 
python retrieval.py -c config/retrieval/ucf101_resnet18.jsonnet \
                    --mc exps/pretext-resnet18/model_best.pth.tar \
                    -e exps/retrieval-resnet18    
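Conceptually, retrieval ranks gallery clips by the cosine similarity between their features and the query clip's feature. Below is a minimal sketch assuming features have already been extracted with the pre-trained backbone (retrieval.py handles extraction and evaluation end to end):

import torch
import torch.nn.functional as F

def topk_retrieval(query: torch.Tensor, gallery: torch.Tensor, k: int = 5) -> torch.Tensor:
    # query: (D,), gallery: (N, D) -> indices of the k most similar gallery clips
    query = F.normalize(query, dim=0)
    gallery = F.normalize(gallery, dim=1)
    similarities = gallery @ query  # cosine similarity per gallery clip
    return similarities.topk(k).indices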

The video retrieval results reported in our paper:

| Architecture | k=1  | k=5  | k=10 | k=20 | k=50 |
|--------------|------|------|------|------|------|
| C3D          | 36.0 | 56.7 | 66.5 | 76.3 | 87.7 |
| ResNet-18    | 41.1 | 59.4 | 68.4 | 77.8 | 88.7 |

Visualization

We further visualize the region of interest (RoI) that contributes most to the similarity score using the class activation map (CAM) technique.

export CUDA_VISIBLE_DEVICES=0,1
python visualization.py -c config/pretrain/s3dg.jsonnet \
                        --load-model exps/pretext-s3dg/model_best.pth.tar \
                        -e exps/visual-s3dg \
                        -x '{batch_size: 1}'

The CAM visualization results will be plotted and saved as PNG files.
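For reference, a CAM-style heatmap is a channel-weighted sum of the backbone's last spatial feature map. The sketch below is illustrative only; feature_maps and weights are assumed inputs, and visualization.py implements the actual pipeline.

import torch

def cam_heatmap(feature_maps: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    # feature_maps: (C, H, W), weights: (C,) -> spatial heatmap in [0, 1]
    heatmap = torch.relu(torch.einsum("c,chw->hw", weights, feature_maps))
    return heatmap / heatmap.max().clamp(min=1e-8)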

Troubleshooting

  • DECORDError cannot find video stream with wanted index: -1

    Some videos in the Kinetics dataset do not contain a valid video stream, for unknown reasons. To filter them out, run python utils/verify_video.py PATH/TO/VIDEOS, then copy the output into the blacklist config in config/dataset/kinetics{400,100}.libsonnet. You need to have ffmpeg installed.

Citation

Please cite the following paper if you find RSPNet useful in your research:

@InProceedings{chen2020RSPNet,
  author    = {Peihao Chen and Deng Huang and Dongliang He and Xiang Long and Runhao Zeng and Shilei Wen and Mingkui Tan and Chuang Gan},
  title     = {RSPNet: Relative Speed Perception for Unsupervised Video Representation Learning},
  booktitle = {The AAAI Conference on Artificial Intelligence (AAAI)},
  year      = {2021}
}

Contact

For any questions, please file an issue or contact:

Peihao Chen: [email protected]
Deng Huang: [email protected]
Comments
  • R(2+1)D-18 pretrained model not fully reproducible

    Hi, I fine-tuned the given pre-trained R(2+1)D model on UCF-101 using the given fine-tuning code. It only achieves 76-77% accuracy. Can you confirm that the given model is the correct one? I use the same setup as described in the README.

    opened by fmthoker 3
  • Framework image

    Hello, thank you for your great work. It's such a smart idea!

    Can you explain the framework figure? I understand that the RSP task and the A-VID task are learned in one iteration; I think that means the anchor is the same. In the algorithm, K clips are sampled from the videos V\v+. However, in Fig. 2 of the paper, the features (green) of the two clips from one video, the 1x clip and the 2x clip, both go to the g_a head for contrastive learning. I think you want to show a randomly selected speed, is that right? In the real experiment, are there just the c_i, c_j, and {c_n} (K) clips, not 2K?

    Thank you.

    opened by youwantsy 2
  • The pre-trained S3D-G model based on the ImageNet and Kinetics-400 datasets?

    Where can I download the S3D-G model pre-trained on the ImageNet and Kinetics-400 datasets? Or can you upload it to this repository?

    opened by LiangSiyv 2
  • Question about computational resources

    Hi, thanks for your wonderful paper and code. I want to know the computational resources of your experiments: 1. What and how many GPUs did you use? 2. What is the training time of pre-training on K400 for 200 epochs? 3. What is the training time of fine-tuning on UCF101, HMDB51, and Something-V2, respectively? Looking forward to your reply. Thanks.

    opened by wjn922 2
  • 'No configuration setting found for key force_n_crop'

    I downloaded your S3D-G pre-trained model for my action recognition task on UCF101 but I keep getting this error:

    argument type: <class 'str'>
    Setting ulimit -n 8192
    world_size=1
    Using dist_url=tcp://127.0.0.1:36879
    Local Rank: 0
    2021-12-30 07:31:39,148|INFO |Args = Args(parser=None, config='config/finetune/ucf101_s3dg.jsonnet', ext_config=[], debug=False, experiment_dir=PosixPath('exps/ucf101-s3dg'), _run_dir=PosixPath('exps/ucf101-s3dg/run_2_20211230_073138'), load_checkpoint=None, load_model=None, validate=False, moco_checkpoint='exps/pretext-s3dg/model_best_s3dg_200epoch.pth.tar', seed=None, world_size=1, _continue=False, no_scale_lr=False)
    2021-12-30 07:31:39,149|INFO |cudnn.benchmark = True
    2021-12-30 07:31:39,278|INFO |Config = batch_size = 4 dataset { annotation_path = "data/UCF101/ucfTrainTestlist" fold = 1 mean = [ 0.485 0.456 0.406 ] name = "ucf101" num_classes = 101 root = "data/UCF101/UCF-101" std = [ 0.229 0.224 0.225 ] } final_validate { batch_size = 4 } log_interval = 10 method = "from-scratch" model { arch = "s3dg" } model_type = "multitask" num_epochs = 50 num_workers = 8 optimizer { dampening = 0 lr = 0.005 milestones = [ 50 100 150 ] momentum = 0.9 nesterov = false patience = 10 schedule = "cosine" weight_decay = 0.0001 } spatial_transforms { color_jitter { brightness = 0 contrast = 0 hue = 0 saturation = 0 } crop_area { max = 1 min = 0.25 } gray_scale = 0 size = 224 } temporal_transforms { frame_rate = 25 size = 64 strides = [ { stride = 1 weight = 1 } ] validate { final_n_crop = 10 n_crop = 1 stride = 1 } } validate { batch_size = 4 }
    2021-12-30 07:31:39,282|INFO |Using global get_model_class({'arch': 's3dg'})
    2021-12-30 07:31:39,283|INFO |Using MultiTask Wrapper
    2021-12-30 07:31:39,283|WARNING |<class 'moco.split_wrapper.MultiTaskWrapper'> using groups: 1
    2021-12-30 07:31:39,383|INFO |Found fc: fc with in_features: 1024
    2021-12-30 07:31:42,488|INFO |Building Dataset: VID: False, Split=train
    2021-12-30 07:31:42,488|INFO |Temporal transform type: clip
    Traceback (most recent call last):
      File "finetune.py", line 502, in <module>
        main()
      File "finetune.py", line 498, in main
        mp.spawn(main_worker, args=(args, dist_url,), nprocs=args.world_size)
      File "/home/ubuntu/anaconda3/envs/ucf101/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
        return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
      File "/home/ubuntu/anaconda3/envs/ucf101/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
        while not context.join():
      File "/home/ubuntu/anaconda3/envs/ucf101/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 119, in join
        raise Exception(msg)
    Exception:

    -- Process 0 terminated with the following error:
    Traceback (most recent call last):
      File "/home/ubuntu/anaconda3/envs/ucf101/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
        fn(i, *args)
      File "/home/ubuntu/RSPNet/finetune.py", line 452, in main_worker
        engine = Engine(args, cfg, local_rank=local_rank)
      File "/home/ubuntu/RSPNet/finetune.py", line 171, in __init__
        self.train_loader = self.data_loader_factory.build(
      File "/home/ubuntu/RSPNet/datasets/classification/__init__.py", line 81, in build
        temporal_transform = self.get_temporal_transform(split)
      File "/home/ubuntu/RSPNet/datasets/classification/__init__.py", line 276, in get_temporal_transform
        if tt_cfg.get_bool("force_n_crop"):
      File "/home/ubuntu/anaconda3/envs/ucf101/lib/python3.8/site-packages/pyhocon/config_tree.py", line 310, in get_bool
        string_value = self.get_string(key, default)
      File "/home/ubuntu/anaconda3/envs/ucf101/lib/python3.8/site-packages/pyhocon/config_tree.py", line 221, in get_string
        value = self.get(key, default)
      File "/home/ubuntu/anaconda3/envs/ucf101/lib/python3.8/site-packages/pyhocon/config_tree.py", line 209, in get
        return self._get(ConfigTree.parse_key(key), 0, default)
      File "/home/ubuntu/anaconda3/envs/ucf101/lib/python3.8/site-packages/pyhocon/config_tree.py", line 151, in _get
        raise ConfigMissingException(u"No configuration setting found for key {key}".format(key='.'.join(key_path[:key_index + 1])))
    pyhocon.exceptions.ConfigMissingException: 'No configuration setting found for key force_n_crop'

    opened by aloma85 0