Official PyTorch implementation for the paper "Efficient Two-Stage Detection of Human–Object Interactions with a Novel Unary–Pairwise Transformer"

Overview

UPT: Unary–Pairwise Transformers


This repository contains the official PyTorch implementation for the paper

Frederic Z. Zhang, Dylan Campbell and Stephen Gould. Efficient Two-Stage Detection of Human–Object Interactions with a Novel Unary–Pairwise Transformer. arXiv preprint arXiv:2112.01838.

[project page] [preprint]

Abstract

...
However, the success of such one-stage HOI detectors can largely be attributed to the representation power of transformers. We discovered that when equipped with the same transformer, their two-stage counterparts can be more performant and memory-efficient, while taking a fraction of the time to train. In this work, we propose the Unary–Pairwise Transformer, a two-stage detector that exploits unary and pairwise representations for HOIs. We observe that the unary and pairwise parts of our transformer network specialise, with the former preferentially increasing the scores of positive examples and the latter decreasing the scores of negative examples. We evaluate our method on the HICO-DET and V-COCO datasets, and significantly outperform state-of-the-art approaches. At inference time, our model with ResNet50 approaches real-time performance on a single GPU.

Demonstration on data in the wild

Model Zoo

We provide weights for UPT models pre-trained on HICO-DET and V-COCO for potential downstream applications. We also provide weights for the fine-tuned DETR models to facilitate reproducibility. To fine-tune a DETR model yourself, refer to this repository.

| Model | Dataset | Default Settings (full/rare/non-rare mAP) | Inference | UPT Weights | DETR Weights |
|:---|:---|:---|:---|:---|:---|
| UPT-R50 | HICO-DET | (31.66, 25.94, 33.36) | 0.042s | weights | weights |
| UPT-R101 | HICO-DET | (32.31, 28.55, 33.44) | 0.061s | weights | weights |
| UPT-R101-DC5 | HICO-DET | (32.62, 28.62, 33.81) | 0.124s | weights | weights |

| Model | Dataset | Scenario 1 | Scenario 2 | Inference | UPT Weights | DETR Weights |
|:---|:---|:---|:---|:---|:---|:---|
| UPT-R50 | V-COCO | 59.0 | 64.5 | 0.043s | weights | weights |
| UPT-R101 | V-COCO | 60.7 | 66.2 | 0.064s | weights | weights |
| UPT-R101-DC5 | V-COCO | 61.3 | 67.1 | 0.131s | weights | weights |

The inference speed was benchmarked on a GeForce RTX 3090. Note that the UPT weights include those of the detector (DETR), so you do not need to download the DETR weights unless you want to train the UPT model from scratch. Training UPT-R50 with 8 GeForce GTX TITAN X GPUs takes around 5 hours on HICO-DET and 40 minutes on V-COCO, almost a tenth of the time compared to one-stage models such as QPIC.
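To confirm that a downloaded checkpoint bundles the detector weights, you can inspect its state dict. The sketch below is an assumption-laden example, not the repository's official API: it assumes the checkpoint is a standard torch.save dictionary with a model_state_dict entry and that detector parameters carry a detector prefix; adjust the key names if your checkpoint differs.

import torch

# Load the UPT checkpoint on CPU; no GPU is needed for inspection.
ckpt = torch.load('checkpoints/upt-r50-hicodet.pt', map_location='cpu')

# Assumed key layout: fall back to the raw dict if 'model_state_dict' is absent.
state_dict = ckpt.get('model_state_dict', ckpt)

# Count parameters under the (assumed) detector prefix.
detr_keys = [k for k in state_dict if k.startswith('detector')]
print(f'{len(detr_keys)} detector tensors bundled in this checkpoint')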

Contact

For general inquiries regarding the paper and code, please post them in Discussions. For bug reports and feature requests, please post them in Issues. You can also contact me at [email protected].

Prerequisites

  1. Install the lightweight deep learning library Pocket; a hedged install sketch follows this list. The recommended PyTorch version is 1.9.0.
  2. Download the repository and the submodules.

    git clone https://github.com/fredzzhang/upt.git
    git submodule init
    git submodule update

  3. Prepare the HICO-DET dataset.
    1. If you have not downloaded the dataset before, run the following script.

      cd /path/to/upt/hicodet
      bash download.sh

    2. If you have previously downloaded the dataset, simply create a soft link.

      cd /path/to/upt/hicodet
      ln -s /path/to/hicodet_20160224_det ./hico_20160224_det

  4. Prepare the V-COCO dataset (contained in MS COCO).
    1. If you have not downloaded the dataset before, run the following script.

      cd /path/to/upt/vcoco
      bash download.sh

    2. If you have previously downloaded the dataset, simply create a soft link.

      cd /path/to/upt/vcoco
      ln -s /path/to/coco ./mscoco2014
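
For step 1, a typical from-source install might look like the following. This is only a sketch that assumes Pocket is pip-installable from its repository root; defer to Pocket's own README if its instructions differ.

git clone https://github.com/fredzzhang/pocket.git
pip install -e pocket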

License

UPT is released under the BSD-3-Clause License.

Inference

We have implemented inference utilities with different visualisation options. Provided you have downloaded the model weights to checkpoints/, run the following command to visualise detected instances together with the attention maps from the cooperative and competitive layers. Use the flag --index to select images, and --box-score-thresh to modify the filtering threshold on object boxes.

python inference.py --resume checkpoints/upt-r50-hicodet.pt --index 8789

Here is the sample output. Note that we manually selected some informative attention maps to display. The predicted scores for each action will be printed by the script as well.
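If the default detections look cluttered, raise the box score threshold; the threshold value below is illustrative.

python inference.py --resume checkpoints/upt-r50-hicodet.pt --index 8789 --box-score-thresh 0.9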

To select the V-COCO dataset and V-COCO models, use the flag --dataset vcoco, and then load the corresponding weights. To visualise interactive human-object pairs for a particular action class, use the flag --action to specify the action index. Here is a lookup table for the action indices.
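For example, a V-COCO visualisation might look like the command below, where the image index and action index are illustrative; consult the lookup table for the action you want.

python inference.py --dataset vcoco --resume checkpoints/upt-r50-vcoco.pt --index 100 --action 2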

Additionally, to cater for different needs, we implemented an option to run inference on custom images via the flag --image-path. The following is an example for the interaction holding an umbrella.

python inference.py --resume checkpoints/upt-r50-hicodet.pt --image-path ./assets/umbrella.jpeg --action 36

Training and Testing

Refer to launch_template.sh for training and testing commands with different options. To train the UPT model from scratch, you need to download the weights for the corresponding DETR model, and place them under /path/to/upt/checkpoints/. Adjust --world-size based on the number of GPUs available.
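For reference, a HICO-DET training run might look like the command below, adapted from a command that appears in the comments section; the paths and GPU count are illustrative.

python main.py --world-size 8 --pretrained checkpoints/detr-r50-hicodet.pth --output-dir checkpoints/upt-r50-hicodet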

To test the UPT model on HICO-DET, you can either use the Python utilities we implemented or the Matlab utilities provided by Chao et al. For V-COCO, we did not implement evaluation utilities; we instead use the utilities provided by Gupta et al. Refer to these instructions for more details.
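For V-COCO, a typical evaluation first caches the detections with main.py and then scores the cache with the vsrl_eval utilities. The sketch below is based on commands that appear in the comments section; the annotation and split file paths are illustrative and depend on where you placed Gupta et al.'s utilities and data.

python main.py --cache --dataset vcoco --data-root vcoco/ --partitions trainval test --output-dir vcoco-r50 --resume checkpoints/upt-r50-vcoco.pt

from vsrl_eval import VCOCOeval

# Paths below are assumptions; point them at your copies of the
# V-COCO annotation, COCO index and split files.
vcocoeval = VCOCOeval('data/vcoco/vcoco_test.json',
                      'data/instances_vcoco_all_2014.json',
                      'data/splits/vcoco_test.ids')
vcocoeval._do_eval('vcoco-r50/cache.pkl', ovr_thresh=0.5)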

Citation

If you find our work useful for your research, please consider citing us:

@article{zhang2021upt,
  author    = {Frederic Z. Zhang and Dylan Campbell and Stephen Gould},
  title     = {Efficient Two-Stage Detection of Human-Object Interactions with a Novel Unary-Pairwise Transformer},
  journal   = {arXiv preprint arXiv:2112.01838},
  year      = {2021}
}

@inproceedings{zhang2021scg,
  author    = {Frederic Z. Zhang and Dylan Campbell and Stephen Gould},
  title     = {Spatially Conditioned Graphs for Detecting Human–Object Interactions},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {13319--13327}
}
Comments
  • error when test vcoco

    error when test vcoco

    I used python main.py --cache --dataset vcoco --data-root vcoco/ --partitions trainval test --output-dir vcoco-r50 --resume checkpoints/upt-r50-vcoco.pt to generate cache.pkl, but an error is reported when I evaluate it.

    The eval code is:

    from vsrl_eval import VCOCOeval
    
    vsrl_annot_file = 'data/vcoco/vcoco_val.json'
    coco_file = 'data/instances_vcoco_all_2014.json'
    split_file = 'data/splits/vcoco_val.ids'
    
    vcocoeval = VCOCOeval(vsrl_annot_file, coco_file, split_file)
    
    det_file = '/media/ming-t/Deng/relation_mppe/HOI-UPT/vcoco-r50/cache.pkl'
    vcocoeval._do_eval(det_file, ovr_thresh=0.5)
    

    The error is:

    loading annotations into memory...
    Done (t=0.74s)
    creating index...
    index created!
    loading vcoco annotations...
    Traceback (most recent call last):
      File "test.py", line 14, in <module>
        vcocoeval._do_eval(det_file, ovr_thresh=0.5)
      File "/media/ming-t/Deng/relation_mppe/HOI-UPT/lib/vcoco/vsrl_eval.py", line 194, in _do_eval
        self._do_agent_eval(vcocodb, detections_file, ovr_thresh=ovr_thresh)
      File "/media/ming-t/Deng/relation_mppe/HOI-UPT/lib/vcoco/vsrl_eval.py", line 417, in _do_agent_eval
        assert(np.amax(rec) <= 1)
      File "<__array_function__ internals>", line 180, in amax
      File "/home/ming-t/anaconda3/envs/pocket/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 2793, in amax
        return _wrapreduction(a, np.maximum, 'max', axis, None, out,
      File "/home/ming-t/anaconda3/envs/pocket/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
        return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
    ValueError: zero-size array to reduction operation maximum which has no identity
    

    How to solve it?

    opened by leijue222 14
  • The HOI loss is NaN for rank 0

    The HOI loss is NaN for rank 0

    Dear sir, I followed the README to build the UPT network, but when I use the command python main.py --world-size 1 --dataset vcoco --data-root ./v-coco --partitions trainval test --pretrained ../detr-r50-vcoco.pth --output-dir ./upt-r50-vcoco.pt

    i got an error

    Traceback (most recent call last):
      File "main.py", line 208, in <module>
        mp.spawn(main, nprocs=args.world_size, args=(args,))
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
        return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
        while not context.join():
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
        raise ProcessRaisedException(msg, error_index, failed_process.pid)
    torch.multiprocessing.spawn.ProcessRaisedException:

    -- Process 0 terminated with the following error:
    Traceback (most recent call last):
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
        fn(i, *args)
      File "/root/autodl-tmp/upload/main.py", line 125, in main
        engine(args.epochs)
      File "/root/pocket/pocket/pocket/core/distributed.py", line 139, in __call__
        self._on_each_iteration()
      File "/root/autodl-tmp/upload/utils.py", line 138, in _on_each_iteration
        raise ValueError(f"The HOI loss is NaN for rank {self._rank}")
    ValueError: The HOI loss is NaN for rank 0

    I tried to train without the pretrained model and got the same error. I tried to print the loss, but it showed an empty tensor. As a beginner, I have no idea what happened. Any help would be appreciated. I look forward to your reply. Thank you very much.

    Inactive 
    opened by OBVIOUSDAWN 11
  • Generate the results on the friends.gif

    Generate the results on the friends.gif

    Hello! Thank you for this amazing work! I am curious to know how you got the inference results showing the names of the objects and the activities on the demo_friends.gif. Can you please tell how you achieved that? Thanks in advance.

    opened by Andre1998Shuvam 8
  • Predicted object class instead of just number?

    Predicted object class instead of just number?

    Hi,

    thanks for the amazing work! May I ask where we can see the predicted object class of each bounding box, or get the prediction result in triplet form? Thank you!

    enhancement 
    opened by xiaoxiaoczw 7
  • code bug?

    code bug?

    opened by ltttpku 6
  • There is still a problem.

    There is still a problem.

    Traceback (most recent call last):
      File "inference.py", line 225, in <module>
        main(args)
      File "C:\Users\User\anaconda3\envs\colab\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "inference.py", line 150, in main
        upt = build_detector(args, conversion)
      File "C:\Users\User\PycharmProjects\hoi\UPT\upt.py", line 276, in build_detector
        detr.backbone[0].num_channels,
      File "C:\Users\User\anaconda3\envs\colab\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'DETRsegm' object has no attribute 'backbone'


    I changed the torch version and tried it in the Colab environment, but the problem still occurs in the same place.

    If possible, can you tell me all libraries using "pip freeze > requirements.txt"?

    If it is not possible to disclose it externally, I would appreciate it if you could send it to [email protected].

    opened by ghzmwhdk777 4
  • Here is problem

    Here is problem

    Traceback (most recent call last):
      File "inference.py", line 225, in <module>
        main(args)
      File "C:\Users\User\anaconda3\envs\colab\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
        return func(*args, **kwargs)
      File "inference.py", line 150, in main
        upt = build_detector(args, conversion)
      File "C:\Users\User\PycharmProjects\hoi\UPT\upt.py", line 268, in build_detector
        detr, _, postprocessors = build_model(args)
      File "C:\Users\User\PycharmProjects\hoi\UPT\detr\models\__init__.py", line 6, in build_model
        return build(args)
      File "C:\Users\User\PycharmProjects\hoi\UPT\detr\models\detr.py", line 313, in build
        num_classes = 20 if args.dataset_file != 'coco' else 91
    AttributeError: 'Namespace' object has no attribute 'dataset_file'

    opened by ghzmwhdk777 4
  • How about training a DETR model on the VCOCO dataset

    How about training a DETR model on the VCOCO dataset

    Thank you for your excellent work. You have provided a tutorial on training DETR models on the HICO-DET dataset; could you tell us how you trained DETR on the V-COCO dataset?

    moved to discussion 
    opened by ddwhzh 4
  • confused about the vcoco dataset

    confused about the vcoco dataset

    There are some useful properties of the V-COCO dataset you implemented:

      • object_to_action gives the list of actions for each object, e.g. {1: [0, 3, 11, 15], 2: [0, 1, 2, 3, 11], ...}
      • objects returns the list of objects, e.g. ['background', 'person', 'bicycle', ...]
      • actions returns the list of actions, e.g. ['hold obj', 'sit instr', 'ride instr', ...]

    However, I'm confused about the relationships among them:

    1. Which object does the key 1 of "1: [0, 3, 11, 15]", which is the first item of object_to_action, represent?
    2. Which action does the values [0, 3, 11, 15] of "1: [0, 3, 11, 15]" represent?

    According to the lists of actions and objects, actions 0, 3, 11 and 15 represent hold obj, look obj, carry obj and cut obj respectively, while object 1 represents person, which appears to be weird.

    question moved to discussion 
    opened by ltttpku 3
  • train

    train

    python main.py --world-size 1 --pretrained checkpoints/detr-r50-hicodet.pth --output-dir checkpoints/upt-r50-hicodet

    raise ValueError(f"The HOI loss is NaN for rank {self._rank}") ValueError: The HOI loss is NaN for rank 0

    opened by wangjunbianqiang 2
  • Multiple loss training code

    Multiple loss training code

    Hi, @fredzzhang :

    I want to try training with multiple losses. I found the relevant code and added a loss; it runs and no error is reported.

    But I want to train with multiple losses properly and set a hyperparameter (weight) for each loss. How do I do that? One approach is sketched after the code below.

    # Fragment from the model's forward pass:

    if self.training:
        interaction_loss = self.compute_interaction_loss(
            boxes, bh, bo, logits, prior, targets, pairwise_tokens_x_collated)
        interaction_x_loss = self.compute_interaction_x_loss(
            boxes, bh, bo, logits, prior, targets, pairwise_tokens_x_collated)
        loss_dict = dict(
            interaction_loss=interaction_loss,
            interaction_x_loss=interaction_x_loss
        )
        return loss_dict

    # Fragment from the training engine:

    def _on_each_iteration(self):
        loss_dict = self._state.net(
            *self._state.inputs, targets=self._state.targets)
        if loss_dict['interaction_loss'].isnan():
            raise ValueError(f"The HOI loss is NaN for rank {self._rank}")

        self._state.loss = sum(loss for loss in loss_dict.values())
        self._state.optimizer.zero_grad(set_to_none=True)
        self._state.loss.backward()
        if self.max_norm > 0:
            torch.nn.utils.clip_grad_norm_(self._state.net.parameters(), self.max_norm)
        self._state.optimizer.step()
    
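    One straightforward answer, offered as a sketch rather than the repository's official method: scale each loss before summing, with the weights as hypothetical hyperparameters tuned on a validation split.

    import torch

    # Hypothetical per-loss weights; tune them on a validation split.
    LOSS_WEIGHTS = {'interaction_loss': 1.0, 'interaction_x_loss': 0.5}

    # Drop-in variant of the engine method above.
    def _on_each_iteration(self):
        loss_dict = self._state.net(
            *self._state.inputs, targets=self._state.targets)
        if any(v.isnan() for v in loss_dict.values()):
            raise ValueError(f"A loss is NaN for rank {self._rank}")

        # Weighted sum replaces the plain sum over loss_dict.values().
        self._state.loss = sum(
            LOSS_WEIGHTS[k] * v for k, v in loss_dict.items())
        self._state.optimizer.zero_grad(set_to_none=True)
        self._state.loss.backward()
        if self.max_norm > 0:
            torch.nn.utils.clip_grad_norm_(
                self._state.net.parameters(), self.max_norm)
        self._state.optimizer.step()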

    yaoyaosanqi.

    opened by yaoyaosanqi 2