Background-Click Supervision for Temporal Action Localization

This repository is the official implementation of BackTAL. In this work, we study temporal action localization under background-click supervision and find that the performance bottleneck of existing approaches mainly comes from background errors. We therefore convert the existing action-click supervision to background-click supervision and develop a novel method, called BackTAL. Extensive experiments on three benchmarks demonstrate the high performance of BackTAL and the rationality of the proposed background-click supervision.

Figure: illustrating the architecture of the proposed BackTAL.

Requirements

To install requirements:

conda env create -f environment.yaml

Data Preparation

Download

Download the pre-extracted I3D features of the Thumos14, ActivityNet1.2 and HACS datasets from BaiduYun (extraction code: back).

Please ensure the data structure is as below:

├── data
│   ├── Thumos14
│   │   ├── val
│   │   │   ├── video_validation_0000051.npz
│   │   │   ├── video_validation_0000052.npz
│   │   │   └── ...
│   │   └── test
│   │       ├── video_test_0000004.npz
│   │       ├── video_test_0000006.npz
│   │       └── ...
│   ├── ActivityNet1.2
│   │   ├── training
│   │   │   ├── v___dXUJsj3yo.npz
│   │   │   ├── v___wPHayoMgw.npz
│   │   │   └── ...
│   │   └── validation
│   │       ├── v__3I4nm2zF5Y.npz
│   │       ├── v__8KsVaJLOYI.npz
│   │       └── ...
│   └── HACS
│       ├── training
│       │   ├── v_0095rqic1n8.npz
│       │   ├── v_62VWugDz1MY.npz
│       │   └── ...
│       └── validation
│           ├── v_008gY2B8Pf4.npz
│           ├── v_00BcXeG1gC0.npz
│           └── ...
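
After downloading, you can sanity-check a feature file with a short Python snippet. This is a minimal sketch, not part of the official code; the key names stored inside the .npz archives are not documented here, so it simply prints whatever each archive contains.

import numpy as np

# Example path taken from the directory tree above; adjust to any downloaded file.
feature_file = "data/Thumos14/val/video_validation_0000051.npz"

with np.load(feature_file) as archive:
    # List every array stored in the archive with its shape and dtype.
    for key in archive.files:
        print(key, archive[key].shape, archive[key].dtype)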

Background-Click Annotations

The raw annotations of the THUMOS14 dataset are under the directory './data/THUMOS14/human_anns'.

Evaluation

Pre-trained Models

You can download checkpoints for the Thumos14, ActivityNet1.2 and HACS datasets from BaiduYun (extraction code: back). These models are trained on Thumos14, ActivityNet1.2 or HACS using the configuration files under the directory "./experiments/". Please put these checkpoints under the directory "./checkpoints".
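
Before running evaluation, you can quickly inspect a downloaded checkpoint with PyTorch. This is a minimal sketch rather than part of the official code, and it assumes the .pth file loads as an ordinary dictionary (either a plain state dict or a wrapper around one):

import torch

# Path relative to the repository root; adjust if you stored the checkpoints elsewhere.
ckpt = torch.load("checkpoints/THUMOS14.pth", map_location="cpu")  # no GPU required for this check

if isinstance(ckpt, dict):
    # Print the first few entries; values may be tensors or nested objects.
    for key in list(ckpt.keys())[:10]:
        value = ckpt[key]
        desc = tuple(value.shape) if torch.is_tensor(value) else type(value).__name__
        print(key, desc)
else:
    print(type(ckpt).__name__)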

Evaluation

Before running the code, please activate the conda environment.

To evaluate the BackTAL model on Thumos14, run:

cd ./tools
python eval.py -dataset THUMOS14 -weight_file ../checkpoints/THUMOS14.pth

To evaluate the BackTAL model on ActivityNet1.2, run:

cd ./tools
python eval.py -dataset ActivityNet1.2 -weight_file ../checkpoints/ActivityNet1.2.pth

To evaluate the BackTAL model on HACS, run:

cd ./tools
python eval.py -dataset HACS -weight_file ../checkpoints/HACS.pth

Results

Our model achieves the following performance:

THUMOS14

threshold   0.3     0.4     0.5     0.6     0.7
mAP (%)     54.4    45.5    36.3    26.2    14.8

ActivityNet v1.2

threshold   average-mAP   0.50    0.75    0.95
mAP (%)     27.0          41.5    27.3    4.7

HACS

threshold   average-mAP   0.50    0.75    0.95
mAP (%)     20.0          31.5    19.5    4.7

Training

To train the BackTAL model on the THUMOS14 dataset, please run this command:

cd ./tools
python train.py -dataset THUMOS14

To train the BackTAL model on the ActivityNet v1.2 dataset, please run this command:

cd ./tools
python train.py -dataset ActivityNet1.2

To train the BackTAL model on the HACS dataset, please run this command:

cd ./tools
python train.py -dataset HACS

Citing BackTAL

@article{yang2021background,
  title={Background-Click Supervision for Temporal Action Localization},
  author={Yang, Le and Han, Junwei and Zhao, Tao and Lin, Tianwei and Zhang, Dingwen and Chen, Jianxin},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  publisher={IEEE}
}

Contact

For any discussions, please contact [email protected].
