Retrieval.pytorch - The code we used in the 2020 DIGIX competition

Overview

retrieval.pytorch

Dependencies

  • python3
  • pytorch
  • numpy
  • scikit-learn
  • tqdm
  • yacs

You can install yacs with pip; the other dependencies can be installed with conda.
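If you want to verify the environment, a quick check like the one below (my own snippet, not part of the repository) confirms that all dependencies are importable:

# Sanity check (not part of the repository): confirm the dependencies import.
import torch, numpy, sklearn, tqdm
from yacs.config import CfgNode
print('pytorch', torch.__version__)
print('numpy', numpy.__version__)
print('scikit-learn', sklearn.__version__)
print('tqdm', tqdm.__version__)
print('yacs ok:', CfgNode is not None)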

Prepare the dataset

First, download the dataset from here. Then move the archives into the directory $DATASET and decompress them with

unzip train_data.zip
unzip test_data_A.zip
unzip test_data_B.zip

Then remove the empty directories in train_data:

cd train_data
rm -rf DIGIX_001453
rm -rf DIGIX_001639
rm -rf DIGIX_002284

Finally, edit the file src/dataset/datasets.py and set the correct values for traindir, test_A_dir, and test_B_dir:

traindir = '$DATASET/train_data'
test_A_dir = '$DATASET/test_data_A'
test_B_dir = '$DATASET/test_data_B'
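Before training, a small check such as the following (my own sketch, not part of the repository; it assumes train_data contains one sub-directory per class, as the rm -rf step above suggests) can confirm the paths are set correctly:

import os

# Dataset sanity check; replace DATASET with the actual path if $DATASET is not
# exported in your shell.
DATASET = os.path.expandvars('$DATASET')
for split in ('train_data', 'test_data_A', 'test_data_B'):
    print(split, 'found:', os.path.isdir(os.path.join(DATASET, split)))

train_dir = os.path.join(DATASET, 'train_data')
if os.path.isdir(train_dir):
    classes = [c for c in os.listdir(train_dir) if os.path.isdir(os.path.join(train_dir, c))]
    empty = [c for c in classes if not os.listdir(os.path.join(train_dir, c))]
    print(len(classes), 'classes,', len(empty), 'empty directories remaining')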

Train the networks to extract features

You can train dla102x and resnet101 with the commands below.

python experiments/DIGIX/dla102x/cgd_margin_loss.py
python experiments/DIGIX/resnet101/cgd_margin_loss.py

To train fishnet99, hrnet_w18, and hrnet_w30, you need to download their ImageNet-pretrained weights from here: fishnet99_ckpt.tar for fishnet99, hrnetv2_w18_imagenet_pretrained.pth for hrnet_w18, and hrnetv2_w30_imagenet_pretrained.pth for hrnet_w30. Then move these weights to ~/.cache/torch/hub/checkpoints so that torch.hub.load_state_dict_from_url can find them, for example as sketched below.
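A minimal sketch for placing the downloaded weights in the cache directory (the source directory ~/Downloads is only an assumption; adjust it to wherever you saved the files):

import os
import shutil
import torch

# Copy the downloaded ImageNet weights into torch.hub's checkpoint cache so
# torch.hub.load_state_dict_from_url finds them instead of downloading.
ckpt_dir = os.path.join(torch.hub.get_dir(), 'checkpoints')  # usually ~/.cache/torch/hub/checkpoints
os.makedirs(ckpt_dir, exist_ok=True)
for name in ('fishnet99_ckpt.tar',
             'hrnetv2_w18_imagenet_pretrained.pth',
             'hrnetv2_w30_imagenet_pretrained.pth'):
    src = os.path.join(os.path.expanduser('~/Downloads'), name)  # assumed download location
    if os.path.isfile(src):
        shutil.copy(src, os.path.join(ckpt_dir, name))
        print('copied', name, '->', ckpt_dir)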

Then you can train fishnet99, hrnet_w18, and hrnet_w30 with

python experiments/DIGIX/fishnet99/cgd_margin_loss.py
python experiments/DIGIX/hrnet_w18/cgd_margin_loss.py
python experiments/DIGIX/hrnet_w30/cgd_margin_loss.py

After training, the model weights can be found in results/DIGIX/{model}/cgd_margin_loss/{time}/transient/checkpoint.final.ckpt. We also provide these weight files.
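A minimal sketch for locating and inspecting one of these checkpoints (the keys stored inside checkpoint.final.ckpt are an assumption, so print them before relying on any):

import glob
import torch

# Pick the newest training run for one backbone and load its final checkpoint.
runs = sorted(glob.glob('results/DIGIX/resnet101/cgd_margin_loss/*/transient/checkpoint.final.ckpt'))
if runs:
    ckpt = torch.load(runs[-1], map_location='cpu')
    print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))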

Extract features for retrieval

You can download the pretrained models from here and move them to the pretrained directory.

Then, run the commands below.

python experiments/DIGIX_test_B/dla102x/cgd_margin_loss_test_B.py
python experiments/DIGIX_test_B/resnet101/cgd_margin_loss_test_B.py
python experiments/DIGIX_test_B/fishnet99/cgd_margin_loss_test_B.py
python experiments/DIGIX_test_B/hrnet_w18/cgd_margin_loss_test_B.py
python experiments/DIGIX_test_B/hrnet_w30/cgd_margin_loss_test_B.py

When finished, the query features for test_data_B can be found in results/DIGIX_test_B/{model}/cgd_margin_loss_test_B/{time}/query_feat, and the gallery features in results/DIGIX_test_B/{model}/cgd_margin_loss_test_B/{time}/gallery_feat.
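Because each run is written under a timestamped directory, a helper like the following (my own snippet, not part of the repository) can locate the newest features for each backbone:

import glob
import os

# Print the newest query/gallery feature directories for every backbone.
for model in ('dla102x', 'resnet101', 'fishnet99', 'hrnet_w18', 'hrnet_w30'):
    runs = sorted(glob.glob(os.path.join('results/DIGIX_test_B', model, 'cgd_margin_loss_test_B', '*')))
    if runs:
        latest = runs[-1]
        print(model, os.path.join(latest, 'query_feat'), os.path.join(latest, 'gallery_feat'))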

Post-processing

You can download the features from here. Then put them into the features directory and decompress them with

tar -xvf DIGIX_test_B_dla102x_5088.tar
tar -xvf DIGIX_test_B_fishnet99_5153.tar
tar -xvf DIGIX_test_B_hrnet_w18_5253.tar
tar -xvf DIGIX_test_B_hrnet_w30_5308.tar
tar -xvf DIGIX_test_B_resnet101_5059.tar

Then the features directory will be organized like this:

|-- DIGIX_test_B_dla102x_5088.tar
|-- DIGIX_test_B_fishnet99_5153.tar
|-- DIGIX_test_B_hrnet_w18_5253.tar
|-- DIGIX_test_B_hrnet_w30_5308.tar
|-- DIGIX_test_B_resnet101_5059.tar
|-- DIGIX_test_B_dla102x_5088
|   |-- gallery_feat
|   |-- query_feat
|-- DIGIX_test_B_fishnet99_5153
|   |-- gallery_feat
|   |-- query_feat
|-- DIGIX_test_B_hrnet_w18_5253
|   |-- gallery_feat
|   |-- query_feat
|-- DIGIX_test_B_hrnet_w30_5308
|   |-- gallery_feat
|   |-- query_feat
|-- DIGIX_test_B_resnet101_5059
|   |-- gallery_feat
|   |-- query_feat

Now, the post-processing can be executed by

python post_process/rank.py --gpu 0 features/DIGIX_test_B_fishnet99_5153 features/DIGIX_test_B_dla102x_5088 features/DIGIX_test_B_hrnet_w18_5253 features/DIGIX_test_B_hrnet_w30_5308 features/DIGIX_test_B_resnet101_5059
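For intuition, the sketch below illustrates what a multi-model ensemble ranking of this kind typically does; it is not the repository's rank.py (which may additionally apply re-ranking) and it assumes the per-model query/gallery features have already been loaded as 2-D arrays with one row per image:

import numpy as np

def ensemble_rank(query_feats, gallery_feats, topk=10):
    # query_feats / gallery_feats: lists of (n_query, d_i) / (n_gallery, d_i)
    # arrays, one pair per model.
    scores = None
    for q, g in zip(query_feats, gallery_feats):
        q = q / np.linalg.norm(q, axis=1, keepdims=True)   # L2-normalize each feature
        g = g / np.linalg.norm(g, axis=1, keepdims=True)
        sim = q @ g.T                                      # cosine similarity matrix
        scores = sim if scores is None else scores + sim   # sum similarities across models
    return np.argsort(-scores, axis=1)[:, :topk]           # top-k gallery indices per query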