IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID

Python >= 3.7 | PyTorch >= 1.1

Intermediate Domain Module (IDM)

This repository is the official implementation of IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID, which has been accepted to ICCV 2021 (Oral).

IDM achieves state-of-the-art performance on the unsupervised domain adaptation task for person re-ID.

Requirements

Installation

git clone https://github.com/SikaStar/IDM.git
cd IDM/idm/evaluation_metrics/rank_cylib && make all
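
The rank_cylib extension is Cython-based, so a working PyTorch environment with Cython and NumPy is needed before running make all. A minimal environment sketch (package names and versions below are illustrative assumptions, not pinned by this repository):

# illustrative environment setup; adjust versions to your CUDA toolkit
conda create -n idm python=3.7 -y
conda activate idm
pip install torch torchvision cython numpy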

Prepare Datasets

cd examples && mkdir data

Download the person re-ID datasets Market-1501, DukeMTMC-reID, MSMT17, PersonX, and UnrealPerson, then unzip them so that the directory structure looks like this:

IDM/examples/data
├── dukemtmc
│   └── DukeMTMC-reID
├── market1501
│   └── Market-1501-v15.09.15
├── msmt17
│   └── MSMT17_V1
├── personx
│   └── PersonX
└── unreal
    ├── list_unreal_train.txt
    └── unreal_vX.Y
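
As a concrete example, the Market-1501 archive could be unpacked into this layout as follows (the archive file name and download location are assumptions; adjust them to match your download):

# run from the IDM root directory; archive name is illustrative
cd examples/data
mkdir -p market1501
unzip Market-1501-v15.09.15.zip -d market1501/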

Prepare ImageNet Pre-trained Models for IBN-Net

When training with the IBN-ResNet backbone, you need to download the ImageNet-pretrained IBN-Net model from this link and save it under logs/pretrained/.

mkdir logs && cd logs
mkdir pretrained

The file tree should be

IDM/logs
└── pretrained
    └── resnet50_ibn_a.pth.tar
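
For example, if resnet50_ibn_a.pth.tar was downloaded into the IDM root directory, it could be moved into place like this (the download location is an assumption):

# move the downloaded IBN-Net weights into the expected location
mv resnet50_ibn_a.pth.tar logs/pretrained/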

ImageNet-pretrained models for ResNet-50 will be downloaded automatically by the Python scripts.

Training

We use four RTX 2080 Ti GPUs for training. Note that:

  • The source and target domains are trained jointly.
  • For baseline methods, use -a resnet50 for the backbone of ResNet-50, and -a resnet_ibn50a for the backbone of IBN-ResNet.
  • For IDM, use -a resnet50_idm to insert IDM into the backbone of ResNet-50, and -a resnet_ibn50a_idm to insert IDM into the backbone of IBN-ResNet.
  • For the strong baseline, use --use-xbm to enable XBM (a variant of the memory bank).

Baseline Methods

To train the baseline methods in the paper, run commands like:

# Naive Baseline
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_naive_baseline.sh ${source} ${target} ${arch}

# Strong Baseline
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_strong_baseline.sh ${source} ${target} ${arch}

Some examples:

### market1501 -> dukemtmc ###

# ResNet-50
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_strong_baseline.sh market1501 dukemtmc resnet50 

# IBN-ResNet-50
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_strong_baseline.sh market1501 dukemtmc resnet_ibn50a

Training with IDM

To train the models with our IDM, run commands like:

# Naive Baseline + IDM
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm.sh ${source} ${target} ${arch} ${stage} ${mu1} ${mu2} ${mu3}

# Strong Baseline + IDM
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm_xbm.sh ${source} ${target} ${arch} ${stage} ${mu1} ${mu2} ${mu3}

  • Defaults: --stage 0 --mu1 0.7 --mu2 0.1 --mu3 1.0

Some examples:

### market1501 -> dukemtmc ###

# ResNet-50 + IDM
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm_xbm.sh market1501 dukemtmc resnet50_idm 0 0.7 0.1 1.0 

# IBN-ResNet-50 + IDM
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm_xbm.sh market1501 dukemtmc resnet_ibn50a_idm 0 0.7 0.1 1.0

Evaluation

We use one RTX 2080 Ti GPU for testing. Note that:

  • Use --dsbn for domain adaptive models, and add --test-source if you want to test on the source domain.
  • Use -a resnet50 for the ResNet-50 backbone and -a resnet_ibn50a for the IBN-ResNet backbone.
  • Use -a resnet50_idm for ResNet-50 + IDM and -a resnet_ibn50a_idm for IBN-ResNet + IDM.

To evaluate the baseline model on the target-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn -d ${dataset} -a ${arch} --resume ${resume} 

To evaluate the baseline model on the source-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn --test-source -d ${dataset} -a ${arch} --resume ${resume} 

To evaluate the IDM model on the target-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm -d ${dataset} -a ${arch} --resume ${resume} --stage ${stage} 

To evaluate the IDM model on the source-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm --test-source -d ${dataset} -a ${arch} --resume ${resume} --stage ${stage} 

Some examples:

### market1501 -> dukemtmc ###

# evaluate the target domain "dukemtmc" on the strong baseline model
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn  -d dukemtmc -a resnet50 \
--resume logs/resnet50_strong_baseline/market1501-TO-dukemtmc/model_best.pth.tar 

# evaluate the source domain "market1501" on the strong baseline model
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn --test-source  -d market1501 -a resnet50 \
--resume logs/resnet50_strong_baseline/market1501-TO-dukemtmc/model_best.pth.tar 

# evaluate the target domain "dukemtmc" on the IDM model (after stage-0)
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm  -d dukemtmc -a resnet50_idm \
--resume logs/resnet50_idm_xbm/market1501-TO-dukemtmc/model_best.pth.tar --stage 0

# evaluate the source domain "market1501" on the IDM model (after stage-0)
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm --test-source  -d market1501 -a resnet50_idm \
--resume logs/resnet50_idm_xbm/market1501-TO-dukemtmc/model_best.pth.tar --stage 0

Acknowledgement

Our code is based on MMT and SpCL. Thanks to Yixiao Ge for these wonderful works.

Citation

If you find our work useful for your research, please kindly cite our paper:

@inproceedings{dai2021idm,
  title={IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID},
  author={Dai, Yongxing and Liu, Jun and Sun, Yifan and Tong, Zekun and Zhang, Chi and Duan, Ling-Yu},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}

If you have any questions, please open an issue or contact me: [email protected]
