QAConv: Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting

Overview

This is the official PyTorch code for our paper [1]. A Chinese blog post is also available: 再见,迁移学习?可解释和泛化的行人再辨识 ("Goodbye, transfer learning? Interpretable and generalizable person re-identification").

Updates

  • 9/19/2021: Include TransMatcher, a transformer-based deep image matching method built on QAConv 2.0.
  • 9/16/2021: QAConv 2.1: simplify graph sampling, implement QAConv with an Einstein summation (a sketch follows this list), use the batch-hard triplet loss, design an adaptive epoch and learning rate scheduling method, and apply automatic mixed precision training.
  • 4/1/2021: QAConv 2.0 [2]: include a new sampler called the Graph Sampler (GS), and remove the class memory. This version is much more efficient in learning. See the updated results.
  • 3/31/2021: QAConv 1.2: include some popular data augmentation methods, and change the ranking.py implementation to the original open-reid version, so that it is more consistent with most other implementations (e.g. open-reid, torch-reid, fast-reid).
  • 2/7/2021: QAConv 1.1: an important update, which includes a pre-training function for better initialization, so that the results are now more stable.
  • 11/26/2020: Include the IBN-Net backbone and the RandPerson dataset.
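
For intuition, the following is a minimal sketch of what the Einstein-summation form of QAConv matching computes with kernel size s=1. It is not the repository's exact implementation; the feature shapes, the L2 normalization, and the plain average standing in for the learned fusion are assumptions for illustration:

    import torch

    def qaconv_similarity(query_fea, gal_fea):
        # query_fea: (Bq, C, H, W), gal_fea: (Bg, C, H, W), both assumed
        # L2-normalized along the channel dimension. With kernel size s=1,
        # query-adaptive convolution reduces to inner products between every
        # pair of local features, which a single einsum expresses.
        Bq, C, H, W = query_fea.shape
        Bg = gal_fea.shape[0]
        sim = torch.einsum('qchw,gcij->qhwgij', query_fea, gal_fea)
        sim = sim.reshape(Bq, H * W, Bg, H * W)
        # Best match of each query location in the gallery image and vice
        # versa, averaged into one pairwise score of shape (Bq, Bg); the
        # actual code learns a small network on top of the local similarities.
        score = sim.max(dim=3).values.mean(dim=1) + sim.max(dim=1).values.mean(dim=2)
        return score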

Requirements

  • PyTorch (> 1.0)
  • scikit-learn
  • scipy

Usage

Download some public datasets (e.g. Market-1501, CUHK03-NP, MSMT) on your own, extract them into a folder, and then run the following commands.

Training and test

python main.py --dataset market --testset cuhk03_np_detected[,msmt] [--data-dir ./data] [--exp-dir ./Exp]

For more options, run "python main.py --help". For example, to use ResNet-152 as the backbone, specify "-a resnet152". To train on the whole dataset (as done in our paper for MSMT17), specify "--combine_all". A combined example is given below.
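
A hedged example combining these options (the paths and the particular flag combination are illustrative only):

python main.py --dataset msmt --combine_all -a resnet152 --data-dir ./data --exp-dir ./Exp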

With the GS sampler and pairwise matching loss, run the following:

python main_gs.py --dataset market --testset cuhk03_np_detected[,msmt] [--data-dir ./data] [--exp-dir ./Exp]
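
Conceptually, the GS sampler rebuilds a nearest-neighbor structure over classes at the start of every epoch: it embeds one random image per class with the current model, computes pairwise class distances, and fills each mini-batch with an anchor class plus its nearest neighboring classes, so batches are dense in hard negatives. Below is a minimal sketch of that idea, not the repository's actual sampler; the class_indices/dist_fn interface and the default hyper-parameters are assumptions:

    import torch
    from torch.utils.data import Sampler

    class GraphSamplerSketch(Sampler):
        """Sketch of the Graph Sampler (GS) idea, used as a DataLoader
        batch_sampler. One batch = one anchor class plus its nearest
        classes under the current model, with k images per class."""

        def __init__(self, class_indices, dist_fn, batch_classes=16, k=4):
            self.class_indices = list(class_indices.values())  # image indices per class
            self.dist_fn = dist_fn  # () -> (num_classes, num_classes) distance matrix
            self.batch_classes = batch_classes
            self.k = k

        def __iter__(self):
            dist = self.dist_fn()  # recomputed once per epoch from one image per class
            for anchor in torch.randperm(len(self.class_indices)).tolist():
                # smallest distances; the anchor itself is included (self-distance 0)
                nearest = dist[anchor].topk(self.batch_classes, largest=False).indices
                batch = []
                for c in nearest.tolist():
                    idx = self.class_indices[c]
                    # sample k images of this class, with replacement for brevity
                    pick = torch.randint(len(idx), (self.k,)).tolist()
                    batch.extend(idx[i] for i in pick)
                yield batch

        def __len__(self):
            return len(self.class_indices)

Passed as DataLoader(dataset, batch_sampler=GraphSamplerSketch(...)), each batch then holds batch_classes x k images of mutually similar identities.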

Test only

python main.py --dataset market --testset duke[,market,msmt] [--data-dir ./data] [--exp-dir ./Exp] --evaluate
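
If the test stage runs out of memory (see the first issue in the Comments section below), the matching batch sizes can be reduced. A hedged example reusing the --test_fea_batch, --test_gal_batch, and --test_prob_batch flags mentioned in that issue (the values are illustrative):

python main.py --dataset market --testset msmt [--data-dir ./data] [--exp-dir ./Exp] --evaluate --test_fea_batch 64 --test_gal_batch 64 --test_prob_batch 64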

Performance

Performance (%) of QAConv under direct cross-dataset evaluation, without transfer learning or domain adaptation:

Training Data   Version      Training Hours   CUHK03-NP        Market-1501      MSMT17
                                              Rank-1   mAP     Rank-1   mAP     Rank-1   mAP
Market          QAConv 1.0   1.33             9.9      8.6     -        -       22.6     7.0
Market          QAConv 2.1   0.25             19.1     18.1    -        -       45.9     17.2
MSMT            QAConv 2.1   0.73             20.9     20.6    79.1     49.5    -        -
MSMT (all)      QAConv 1.0   26.90            25.3     22.6    72.6     43.1    -        -
MSMT (all)      QAConv 2.1   3.42             27.6     28.0    82.4     56.9    -        -
RandPerson      QAConv 2.1   2.33             17.9     16.1    75.9     46.3    44.1     15.2

Contacts

Shengcai Liao
Inception Institute of Artificial Intelligence (IIAI)
[email protected]

Citation

[1] Shengcai Liao and Ling Shao, "Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting." In Proceedings of the 16th European Conference on Computer Vision (ECCV), 23-28 August 2020.

[2] Shengcai Liao and Ling Shao, "Graph Sampling Based Deep Metric Learning for Generalizable Person Re-Identification." arXiv preprint arXiv:2104.01546, 2021.

@inproceedings{Liao-ECCV2020-QAConv,
  title     = {{Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting}},
  author    = {Shengcai Liao and Ling Shao},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2020}
}

@article{Liao-arXiv2021-GS,
  author    = {Shengcai Liao and Ling Shao},
  title     = {{Graph Sampling Based Deep Metric Learning for Generalizable Person Re-Identification}},
  journal   = {CoRR},
  volume    = {abs/2104.01546},
  year      = {2021},
  url       = {http://arxiv.org/abs/2104.01546},
  archivePrefix = {arXiv},
  eprint    = {2104.01546}
}
Comments
  • Out of memory: --test_fea_batch, --test_gal_batch, --test_prob_batch all set to 128

    Run: python main.py --dataset market --testset msmt --data-dir ./reid/datasets/ --exp-dir ./Exp

    Market dataset loaded
      subset  | # ids | # images
      train   | 751   | 12935
      query   | 750   | 3367
      gallery | 751   | 15912

    • Finished epoch 1 at lr=[0.0005, 0.005, 0.005]. Loss: 14.812. Acc: 54.97%. Training time: 174 seconds.
    • Finished epoch 2 at lr=[0.0005, 0.005, 0.005]. Loss: 13.333. Acc: 61.35%. Training time: 344 seconds.
    • Finished epoch 3 at lr=[0.0005, 0.005, 0.005]. Loss: 11.447. Acc: 68.55%. Training time: 514 seconds.
    • Finished epoch 4 at lr=[0.0005, 0.005, 0.005]. Loss: 10.338. Acc: 72.09%. Training time: 684 seconds.
    • Finished epoch 5 at lr=[0.0005, 0.005, 0.005]. Loss: 9.319. Acc: 75.31%. Training time: 855 seconds.

    Decay the learning rate by a factor of 0.1. Final epochs: 7.

    • Finished epoch 6 at lr=[5e-05, 0.0005, 0.0005]. Loss: 8.566. Acc: 77.75%. Training time: 1025 seconds.
    • Finished epoch 7 at lr=[5e-05, 0.0005, 0.0005]. Loss: 7.732. Acc: 80.22%. Training time: 1195 seconds.

    The learning converges at epoch 7.

    Evaluate the learned model: test_names: ['msmt']
    MSMT dataset loaded
      subset  | # ids | # images
      train   | 1041  | 32621
      query   | 3060  | 11659
      gallery | 3060  | 82161

    /home/luotao/anaconda3/envs/QAConv/lib/python3.6/site-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
    Time: 2690.337 seconds. / 1284. similarity 1 / 1284.
    Killed

    With --test_fea_batch, --test_gal_batch, and --test_prob_batch all set to 128, the test stage is killed ("已杀死" = "Killed" in the log above). Setting the three parameters to 64 gives the same error: Time: 2690.337 seconds. / 1284. similarity 1 / 1284. Killed.

    opened by huangpan2507 16
  • Unstable results

    Hi, thanks for sharing your code. However, I ran it twice and got quite different results, perhaps due to the random seed. Did you set a fixed random seed when training the model?

    good first issue 
    opened by HeliosZhao 10
  • Training is very slow!

    Hi, I am training on 300K images with two 2080 Ti GPUs, batch size 64, and fp16 to speed up training, but one epoch took more than half an hour to reach iteration 511. Is this normal?
    Epoch: [1][511/4714] Time 2.620 (2.646) Data 0.001 (0.002) Loss 456.984 (520.544) Prec 0.00% (0.00%)

    opened by zengwb-lx 6
  • Unable to use ClassMemoryLoss to train the model

    In the QAConv code, I tried to use ClassMemoryLoss as the criterion, but the accuracy is nearly zero. Is ClassMemoryLoss still usable? Are ClassMemoryLoss and the focal loss in the paper the same? The code is shown below.

    criterion = ClassMemoryLoss(matcher, num_classes, num_features, hei, wid).cuda()

    opened by ArminLee 4
  • Question about backbone

    Hi Mr. Liao, I very much appreciate your novel idea and your code. I notice that you choose ResNet as the backbone. ResNet-152 has shown great results in the paper and in my own experiments, but it takes quite some time to train, even when we use only layer 3 of the model. Have you tried a lightweight backbone such as MobileNet? Is there any specific reason for choosing ResNet as the feature extractor? Thanks in advance.

    opened by jingyut 4
  • A question about graph sampling

    Hello Prof. Liao, I would like to understand why graph sampling improves domain-generalizable re-id so effectively. Previous domain-generalizable re-id methods usually rely on domain-invariant learning, style normalization, and similar techniques, while graph sampling seems to take a different route, improving domain generalization by strengthening hard mining. I do not quite understand this point and look forward to your reply. Thank you!

    opened by Terminator8758 3
  • Graph Sampler

    Thank you for your work! I have two questions about Graph Sampling:

    1. Intuitively, it should also work on the normal ReID task.
    2. The whole process seems to be: before training each epoch, the proposed sampler randomly selects one image per class, then computes a distance matrix over these images, which represents the distances between classes. This way we can mine the hardest samples in the entire dataset, not just within a batch. But I do not see where the "Graph" connects to the process above. Looking forward to your help.
    opened by liyuke65535 3
  • About s=1

    Hello Prof. Liao, I would like to ask about the choice of s. Your paper mentions that s=1 was chosen for efficiency. My understanding is: without the class memory and with pairwise matching, one QAConv pass has time complexity O(B^2 * (HW)^2 * s). Judging by the complexity alone, a slightly larger or smaller s should make little difference. However, when s=1, the matching can be done directly with matrix multiplication, which is heavily optimized, so the actual running time is greatly reduced; hence s=1. Is my understanding correct? I look forward to your advice!

    opened by pSGAme 2
  • Questions about the Graph Sampling work

    Hello Prof. Liao, I read your recent Graph Sampling paper and have two questions I would like to ask:

    1. When building the graph in each epoch, could randomly sampling a single image per class introduce a large bias?
    2. When using K=2 to deal with gradients that are too small, could some hard cases never be sampled at all? (I have run into this before on industrial datasets far larger than academic ones, where hard cases were never sampled.)
    opened by zhustrong 2
  • Issues about evaluators.py

    I use Market as the training dataset and Duke as the test dataset. When I use --do_tlift, it reports that the tensor sizes do not match.

    In evaluators.py, line 212, the original dist size is 222817661 on the Market dataset, while the size of dist_rerank is 2228253, because num_gal is not the same. The value of num_gal is the number of gallery images as defined at line 189, but it is redefined at line 204 as the size of the gallery features.

    opened by ArminLee 2
  • self.model.eval()

    Recently, I have read your code for QAConv, and I have a question to consult you about. In the train() method in trainer.py, the following code appears:

    class BaseTrainer(object):
        ...
        for i, inputs in enumerate(data_loader):
            self.model.eval()
            self.criterion.train()

    Why don't you set the model to training mode with self.model.train() instead of using model.eval()? In the whole project, I also found no other place that calls model.train().

    opened by xiaopanchen 2
  • Can't find qaconv_loss

    Hello,

    First of all, thanks so much for your good work!

    Here is a question: inside test_matching.py, you import from reid.loss.qaconv_loss import QAConvLoss; however, it seems that qaconv_loss no longer exists, so I changed to other loss functions. Will this influence the performance?

    Thanks!

    opened by xyimaging 5
Releases(v2.1)
  • v2.1(Sep 16, 2021)

    • Simplified graph sampling
    • Einstein summation for QAConv
    • Hard triplet loss
    • Adaptive epoch and learning rate scheduling
    • Automatic mixed precision training
  • v2.0(Apr 1, 2021)

    • Include a new sampler called Graph Sampler (GS).
    • Remove the class memory based loss. Instead, a pairwise matching loss is implemented.
    • This version is much more efficient in learning.
  • v1.2(Mar 31, 2021)

    Include some popular data augmentation methods, and change the ranking.py implementation to the original open-reid version, so that it is more consistent with most other implementations (e.g. open-reid, torch-reid, fast-reid).

  • v1.1(Mar 30, 2021)

    • Include the IBN-Net as backbone, and the RandPerson dataset.
    • Include a pre-training function for a better initialization, so that the results are now more stable.
  • v1.0-eccv(Aug 12, 2020)
