VANET

Code reproduction for the paper "Vehicle Re-identification with Viewpoint-aware Metric Learning"

Introduction

This is an implementation of the paper "Vehicle Re-identification with Viewpoint-aware Metric Learning" (VANet), which supports both single-branch and two-branch training.

Implementation details

The whole implementation is based on the PVEN project (https://github.com/silverbulletmdc/PVEN). The key code blocks that were added or modified are distributed as follows:

For network construction:
    This project provides two backbone versions, 'googlenet' and 'resnet50'. The corresponding configuration files
    and code interfaces for both are provided in full.
    code location: vehicle_reid_pytorch/models/vanet.py
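
    As a rough illustration, here is a minimal two-branch sketch, assuming a ResNet-50 backbone split as described in the results note further below. Class and attribute names are hypothetical and do not correspond to the actual code in vehicle_reid_pytorch/models/vanet.py.

      # Illustrative two-branch sketch; see vehicle_reid_pytorch/models/vanet.py for the real model.
      import copy
      import torch.nn as nn
      from torchvision.models import resnet50

      class TwoBranchSketch(nn.Module):
          def __init__(self):
              super().__init__()
              modules = list(resnet50().children())                   # conv1, bn1, relu, maxpool, layer1..layer4, avgpool, fc
              self.shared_conv = nn.Sequential(*modules[:6])          # first 6 modules, shared by both branches
              self.branch_conv_s = nn.Sequential(*modules[6:8])       # layer3 + layer4 for one branch
              self.branch_conv_d = copy.deepcopy(self.branch_conv_s)  # independent copy for the other branch
              self.pool = nn.AdaptiveAvgPool2d(1)

          def forward(self, x):
              shared = self.shared_conv(x)
              f_same = self.pool(self.branch_conv_s(shared)).flatten(1)  # feature compared for similar-view pairs
              f_diff = self.pool(self.branch_conv_d(shared)).flatten(1)  # feature compared for different-view pairs
              return f_same, f_diff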

For training:
    This project provides two training modes, 'single branch' (the baseline of VANet) and 'two branch' (VANet).
    code location: examples/parsing_reid/main_vanet_single_branch.py
    code location: examples/parsing_reid/main_vanet_two_branch.py

Configuration files:
    code location: examples/parsing_reid/configs/veri776_b64_baseline_vanet_single_branch_resnet.yml
    code location: examples/parsing_reid/configs/veri776_b64_baseline_vanet_two_branch_resnet.yml
    code location: examples/parsing_reid/configs/veri776_b64_baseline_vanet_two_branch_googlenet.yml

For loss calculation:
    code location: vehicle_reid_pytorch/loss/triplet_loss.py
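
    For orientation, the block below is a minimal batch-hard triplet loss sketch, assuming (N, D) embeddings and integer identity labels. The project's triplet_loss.py additionally handles the viewpoint-aware separation of same-view and different-view pairs and may differ in detail.

      # Minimal batch-hard triplet loss sketch (illustrative; see
      # vehicle_reid_pytorch/loss/triplet_loss.py for the project's implementation).
      import torch
      import torch.nn.functional as F

      def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
          """embeddings: (N, D) features; labels: (N,) identity labels."""
          dist = torch.cdist(embeddings, embeddings, p=2)        # (N, N) pairwise distances
          same_id = labels.unsqueeze(0) == labels.unsqueeze(1)   # positive-pair mask (includes the diagonal)
          # Hardest positive: farthest sample sharing the identity.
          pos = dist.masked_fill(~same_id, float('-inf')).max(dim=1).values
          # Hardest negative: closest sample with a different identity.
          neg = dist.masked_fill(same_id, float('inf')).min(dim=1).values
          # VANet applies such constraints separately to same-view and different-view pairs.
          return F.relu(pos - neg + margin).mean()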

For evaluation:
    mAP, CMC, and the histogram distribution drawing functions are included.
    code location: examples/parsing_reid/math_tools.py
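
    As a reference point, the sketch below computes CMC and mAP from a query-gallery distance matrix. It is a simplified version that assumes every query identity appears in the gallery and ignores the same-camera filtering of the full VeRi-776 protocol; the project's complete evaluation code is in examples/parsing_reid/math_tools.py.

      # Simplified CMC / mAP sketch (illustrative; ignores same-camera filtering).
      import numpy as np

      def evaluate_rank(dist, q_ids, g_ids, max_rank=10):
          """dist: (num_query, num_gallery) distances; q_ids, g_ids: integer identity labels."""
          order = np.argsort(dist, axis=1)            # gallery indices sorted by distance, per query
          matches = (g_ids[order] == q_ids[:, None])  # True where the ranked gallery item is a correct match
          cmc = np.zeros(max_rank)
          aps = []
          for m in matches:
              if not m.any():
                  continue                            # skip queries with no correct gallery sample
              first_hit = np.argmax(m)                # rank (0-based) of the first correct match
              if first_hit < max_rank:
                  cmc[first_hit:] += 1
              hit_positions = np.where(m)[0]
              precisions = (np.arange(len(hit_positions)) + 1) / (hit_positions + 1)
              aps.append(precisions.mean())           # average precision for this query
          cmc /= len(aps)
          return cmc, float(np.mean(aps))             # CMC curve (rank-1..rank-max_rank) and mAP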

Results comparison

We achieved the following performance using the method proposed in the VANet paper.

     --------------------------------------------------------------
                  |    mAP    |   rank-1  |   rank-5  |  rank-10  |
     --------------------------------------------------------------
      VANET+BOT   |   80.1%   |   96.5%   |   98.5%   |   99.4%   |
     --------------------------------------------------------------
      BOT (ours)  |   77.8%   |   95.3%   |   97.8%   |   98.8%   |
     --------------------------------------------------------------
      BOT [1]     |   78.2%   |   95.5%   |   97.9%   |     *     |
     --------------------------------------------------------------

Note: 'BOT' refers to the "bag of tricks" baseline proposed in paper [2]. For the two-branch implementation of "VANET+BOT" above, we use the first 6 layers of the official ResNet-50 as the shared_conv network and the remaining two layers as the branch_conv network. Further instructions are given in the corresponding code.

Following the paper, the distance distributions of four types of pairs (similar-view same-id, similar-view different-id, different-view different-id, different-view same-id) are also drawn. Note: this visualization code can be found in examples/parsing_reid/math_tools.py.
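
The drawing itself can be as simple as the following sketch; argument names are hypothetical and the actual function in examples/parsing_reid/math_tools.py may differ.

  # Sketch of the four-way pair-distance histogram (illustrative only).
  import matplotlib.pyplot as plt

  def draw_pair_hist(dists, same_view, same_id, out_path="pair_hist.png"):
      """dists: (P,) pair distances; same_view, same_id: (P,) boolean masks over the same pairs."""
      groups = {
          "similar-view, same-id":        dists[same_view & same_id],
          "similar-view, different-id":   dists[same_view & ~same_id],
          "different-view, same-id":      dists[~same_view & same_id],
          "different-view, different-id": dists[~same_view & ~same_id],
      }
      for name, d in groups.items():
          plt.hist(d, bins=50, density=True, alpha=0.5, label=name)  # overlay the four distributions
      plt.xlabel("feature distance")
      plt.ylabel("density")
      plt.legend()
      plt.savefig(out_path)
      plt.close()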

1. Get started

All the results are tested on the VeRi-776 dataset. For the environment setup, please refer to other general re-id projects; this project follows fast-reid's setup.

2. Training

Refer to the scripts under run_sh/run_main_XXX.sh. Note: if you want to train on your own dataset, keep its structure consistent with the output of this project's veri776 dataloader; refer to the related code for more details.

Example:

  sh ./run_sh/run_main_vanet_two_branch_resnet.sh

3. Evaluation

Refer to the scripts under run_sh/run_eval_XXX.sh. Note: a histogram drawing function has been added to the evaluation stage. If you do not need this statistic for the moment, remember to disable it, since it is somewhat time-consuming; the relevant code is in examples/parsing_reid/math_tools.py.

Example:

  sh ./run_sh/run_eval_two_branch_resnet.sh

References

[1] Khorramshahi, Pirazh, et al. "The devil is in the details: Self-supervised attention for vehicle re-identification." European Conference on Computer Vision. Springer, Cham, 2020.

[2] Luo, Hao, et al. "Bag of tricks and a strong baseline for deep person re-identification." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2019.

Contact

For any questions, please file an issue or contact:

Shichao Liu (Shanghai Em-Data Technology Co., Ltd.) [email protected]