[ICCV 2021] Official PyTorch implementation for Deep Relational Metric Learning.

Deep Relational Metric Learning

This repository is the official PyTorch implementation of Deep Relational Metric Learning.

Framework

(Framework figures for AEL and DRML are omitted here; see the paper.)

Datasets

CUB-200-2011

Download from here.

Organize the dataset as follows:

- cub200
    |- train
    |   |- class0
    |   |   |- image0_1
    |   |   |- ...
    |   |- ...
    |- test
        |- class100
        |   |- image100_1
        |   |- ...
        |- ...
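
The raw CUB-200-2011 archive keeps all images under a single images/ directory, so they have to be rearranged into the train/test layout above, with the first 100 classes used for training and the remaining 100 for testing (matching --num_classes 100 below). The following is a minimal sketch of such a split, assuming the standard CUB_200_2011/images/<id>.<name>/ folder naming; the helper name and paths are illustrative and not part of this repository:

import os
import shutil

def split_cub200(src_images_dir, dst_root):
    # src_images_dir: the raw CUB_200_2011/images/ folder, one sub-folder per class
    # dst_root: the cub200/ folder expected by this repo (train/ + test/)
    for class_dir in sorted(os.listdir(src_images_dir)):
        class_idx = int(class_dir.split(".")[0])  # e.g. "001.Black_footed_Albatross" -> 1
        subset = "train" if class_idx <= 100 else "test"
        os.makedirs(os.path.join(dst_root, subset), exist_ok=True)
        shutil.copytree(os.path.join(src_images_dir, class_dir),
                        os.path.join(dst_root, subset, class_dir))

# Example: split_cub200("CUB_200_2011/images", "cub200")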

Cars196

Download from here.

Organize the dataset as follows:

- cars196
    |- train
    |   |- class0
    |   |   |- image0_1
    |   |   |- ...
    |   |- ...
    |- test
        |- class98
        |   |- image98_1
        |   |- ...
        |- ...
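
Because the training commands below pass --num_classes 100 for CUB200 and --num_classes 98 for Cars196, it is worth sanity-checking the prepared folders before training. This is an illustrative check, not part of the repository:

import os

def count_split(split_dir):
    # Count class folders and images under a train/ or test/ directory.
    classes = [d for d in os.listdir(split_dir) if os.path.isdir(os.path.join(split_dir, d))]
    images = sum(len(os.listdir(os.path.join(split_dir, c))) for c in classes)
    return len(classes), images

for root, expected in (("cub200", 100), ("cars196", 98)):
    for split in ("train", "test"):
        n_cls, n_img = count_split(os.path.join(root, split))
        print(f"{root}/{split}: {n_cls} classes, {n_img} images")
    assert count_split(os.path.join(root, "train"))[0] == expected  # must match --num_classes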

Requirements

To install requirements:

pip install -r requirements.txt

Training

Baseline models

To train the baseline model with the ProxyAnchor loss on CUB200, run this command:

CUDA_VISIBLE_DEVICES=0 python examples/train/main.py \
--save_name <experiment-name> \
--data_path <path-of-data> \
--phase train \
--device 0 \
--setting proxy_baseline \
--dataset cub200 \
--num_classes 100 \
--batch_size 120 \
--delete_old

To train the baseline model with the ProxyAnchor loss on Cars196, run this command:

CUDA_VISIBLE_DEVICES=0 python examples/train/main.py \
--save_name <experiment-name> \
--data_path <path-of-data> \
--phase train \
--device 0 \
--setting proxy_baseline \
--dataset cars196 \
--num_classes 98 \
--batch_size 120 \
--delete_old

DRML models

To train the DRML model proposed in the paper with the ProxyAnchor loss on CUB200, run this command:

CUDA_VISIBLE_DEVICES=0 python examples/train/main.py \
--save_name <experiment-name> \
--data_path <path-of-data> \
--phase train \
--device 0 \
--setting proxy \
--dataset cub200 \
--num_classes 100 \
--batch_size 120 \
--delete_old

To train the DRML model proposed in the paper with the ProxyAnchor loss on Cars196, run this command:

CUDA_VISIBLE_DEVICES=0 python examples/train/main.py \
--save_name <experiment-name> \
--data_path <path-of-data> \
--phase train \
--device 0 \
--setting proxy \
--dataset cars196 \
--num_classes 98 \
--batch_size 120 \
--delete_old

Device

We tested our code on a Linux machine with an NVIDIA RTX 3090 GPU. We recommend a GPU with more than 8 GB of memory for the BN-Inception backbone with a batch size of 120.

Results

The baseline models achieve the following performance:

Model name                   | Recall@1 | Recall@2 | Recall@4 | Recall@8 | NMI
cub200-ProxyAnchor-baseline  | 67.3     | 77.7     | 85.7     | 91.4     | 68.7
cars196-ProxyAnchor-baseline | 84.4     | 90.7     | 94.3     | 96.8     | 69.7

Our models achieve the following performance:

Model name                   | Recall@1 | Recall@2 | Recall@4 | Recall@8 | NMI
cub200-ProxyAnchor-ours      | 68.7     | 78.6     | 86.3     | 91.6     | 69.3
cars196-ProxyAnchor-ours     | 86.9     | 92.1     | 95.2     | 97.4     | 72.1
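
For reference, Recall@K is the standard retrieval metric used above: the fraction of test images whose K nearest neighbors in the embedding space (excluding the image itself) contain at least one image of the same class; NMI is the normalized mutual information between a clustering of the embeddings and the ground-truth classes. A minimal, repository-independent sketch of Recall@K with PyTorch:

import torch

def recall_at_k(embeddings, labels, ks=(1, 2, 4, 8)):
    # embeddings: (N, D) L2-normalized test embeddings, labels: (N,) class ids
    sims = embeddings @ embeddings.t()          # cosine similarity matrix
    sims.fill_diagonal_(float("-inf"))          # never retrieve the query itself
    knn = sims.topk(max(ks), dim=1).indices     # indices of the nearest neighbors
    same = labels[knn] == labels.unsqueeze(1)   # does each neighbor share the query's class?
    return {k: same[:, :k].any(dim=1).float().mean().item() for k in ks}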

COMING SOON

  • We will upload the code for the cross-validation setting soon.
  • We will release the optimal hyper-parameters for the experiments soon.

Owner

Borui Zhang

I am a first-year Ph.D. student in the Department of Automation at Tsinghua University (THU). My research direction is computer vision.