Google_Landmark_Retrieval_2021_2nd_Place_Solution

Overview

The 2nd place solution of the 2021 Google Landmark Retrieval competition on Kaggle.

Environment

We use CUDA 11.1, Python 3.7, torch 1.9.1, and torchvision 0.8.1 for training and testing.

Download the ImageNet-pretrained models ResNeXt101ibn and SEResNet101ibn from IBN-Net. ResNeSt101 and ResNeSt269 can be found in ResNeSt.
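If it is more convenient, both upstream repos also expose these backbones through torch.hub; below is a minimal sketch of pulling the ImageNet weights this way (the entry-point names are the ones published by IBN-Net and ResNeSt, and the training code may instead expect locally downloaded checkpoint files):

import torch

# ImageNet-pretrained IBN-Net backbones (published by XingangPan/IBN-Net).
resnext101_ibn = torch.hub.load('XingangPan/IBN-Net', 'resnext101_ibn_a', pretrained=True)
seresnet101_ibn = torch.hub.load('XingangPan/IBN-Net', 'se_resnet101_ibn_a', pretrained=True)

# ImageNet-pretrained ResNeSt backbones (published by zhanghang1989/ResNeSt).
resnest101 = torch.hub.load('zhanghang1989/ResNeSt', 'resnest101', pretrained=True)
resnest269 = torch.hub.load('zhanghang1989/ResNeSt', 'resnest269', pretrained=True)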

Prepare data

  1. Download GLDv2 full version from the official site.

  2. Run python tools/generate_gld_list.py. This will generate the clean, c2x, trainfull and all data splits used in the different stages of training.

  3. Validation annotations come from all 1129 images in GLDv2. We expand the competition index set to index_expand, so that each query can find all of its ground truths in the expanded index set and the validation is more accurate.

Train

We use 8 GPUs (32GB/16GB) for training. The evaluation metric in landmark retrieval differs from that in person re-identification. Due to the scale of the validation set, we skip the validation stage during training and simply use the model from the last epoch for evaluation.
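For reference, the retrieval metric is mean Average Precision at 100 (mAP@100); below is a minimal offline sketch of the computation (function and variable names are illustrative, not taken from this repo):

def average_precision_at_k(ranked_ids, gt_ids, k=100):
    # AP@k for one query: ranked_ids is the retrieval order, gt_ids the ground-truth set.
    gt = set(gt_ids)
    if not gt:
        return 0.0
    hits, score = 0, 0.0
    for i, idx in enumerate(ranked_ids[:k]):
        if idx in gt:
            hits += 1
            score += hits / (i + 1)  # precision at each correct result
    return score / min(len(gt), k)

def mean_average_precision_at_100(all_ranked, all_gt):
    # Average AP@100 over all queries.
    aps = [average_precision_at_k(r, g, 100) for r, g in zip(all_ranked, all_gt)]
    return sum(aps) / len(aps)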

Fast Train Script

For quick experiments, we provide a script for R50_256 trained on the clean subset. This setting trains very fast and is helpful for debugging.

python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 55555 --max_restarts 0 train.py --config_file configs/GLDv2/R50_256.yml

Whole Train Pipeline

The whole training pipeline for the SER101ibn backbone is listed below. Other backbones and input sizes can be configured accordingly.

python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 55555 --max_restarts 0 train.py --config_file configs/GLDv2/SER101ibn_384.yml
python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 55555 --max_restarts 0 train.py --config_file configs/GLDv2/SER101ibn_384_finetune.yml
python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 55555 --max_restarts 0 train.py --config_file configs/GLDv2/SER101ibn_512_finetune.yml
python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 55555 --max_restarts 0 train.py --config_file configs/GLDv2/SER101ibn_512_all.yml

Inference (notebooks)

  • With the four models trained, cd to submission/code/ and modify the settings in landmark_retrieval.py appropriately.

  • Then run eval_retrieval.sh to generate the submission file and evaluate on the validation set offline.
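For reference, the competition expects a CSV with an id column and up to 100 space-separated retrieved image ids per test image; a minimal sketch of writing such a file (variable and function names are illustrative):

import csv

# predictions: dict mapping each test image id to a ranked list of index image ids.
def write_submission(predictions, path='submission.csv'):
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['id', 'images'])
        for query_id, ranked_ids in predictions.items():
            writer.writerow([query_id, ' '.join(ranked_ids[:100])])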

General Settings

REID_EXTRACT_FLAG: skip feature extraction when using offline code.
FEAT_DIR: directory for cached features.
IMAGE_DIR: competition image directory. We make a soft link to the competition data at submission/input/landmark-retrieval-2021/.
RAW_IMAGE_DIR: original GLDv2 directory.
MODEL_DIR: the latest models used for submission.
META_DIR: directory for meta files used in reranking.
LOCAL_MATCHING and KR_FLAG are disabled for our submission.
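Put together, the top of landmark_retrieval.py might look roughly like the sketch below (the variable names come from the list above; all paths and values are examples only and must be adapted to your local layout):

# Illustrative settings block for landmark_retrieval.py; paths are examples.
REID_EXTRACT_FLAG = True                          # False: skip extraction and reuse cached features
FEAT_DIR = '../input/feat-cache/'                 # where extracted features are cached
IMAGE_DIR = '../input/landmark-retrieval-2021/'   # soft link to the competition data
RAW_IMAGE_DIR = '/data/GLDv2/'                    # original GLDv2 directory
MODEL_DIR = '../input/final-models/'              # latest models used for submission
META_DIR = '../input/meta-data-final/'            # meta files used for reranking
LOCAL_MATCHING = False                            # disabled in our submission
KR_FLAG = False                                   # disabled in our submission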

Fast Inference Script

Use the R50_256 model trained on the clean subset, corresponding to the fast train script. Set CATEGORY_RERANK and REF_SET_EXTRACT to False. You should get about mAP=32.84% on the validation set.

Whole Inference Pipeline

  • Copy cache_all_list.pkl, cache_index_train_list.pkl and cache_full_list.pkl from cache to submission/input/meta-data-final

  • Set REF_SET_EXTRACT to True to extract features for all images of GLDv2. This will save about 4.9 million 512-dim features for each model in submission/input/meta-data-final.

  • Set REF_SET_EXTRACT to False and CATEGORY_RERANK to before_merge. This will load the precomputed features and run the proposed Landmark-Country aware rerank; a rough sketch of the plain retrieval step it builds on appears after this list.

  • The notebook on Kaggle is exactly the same code as base_landmark.py and landmark_retrieval.py. We also upload the same notebook as kaggle.ipynb.
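A minimal sketch of that plain retrieval step, i.e. cosine-similarity search over the extracted 512-dim features before any reranking (file names and array shapes below are assumptions, not the repo's actual cache format):

import numpy as np

# Assume query/index features were extracted as float32 arrays of shape (N, 512).
query_feats = np.load('query_feats.npy')   # illustrative file name
index_feats = np.load('index_feats.npy')   # illustrative file name

# L2-normalize so that the dot product equals cosine similarity.
query_feats /= np.linalg.norm(query_feats, axis=1, keepdims=True)
index_feats /= np.linalg.norm(index_feats, axis=1, keepdims=True)

# For each query, keep the 100 most similar index images (before any reranking).
sims = query_feats @ index_feats.T
top100 = np.argsort(-sims, axis=1)[:, :100]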

Kaggle and ICCV workshops

  • The challenge was held on Kaggle and the leaderboard can be found here. We ranked 2nd (2/263) in this challenge.

  • The Kaggle discussion post can be found here.

  • ICCV workshop slides coming soon.

Thanks

The code is motivated by AICITY2021_Track2_DMT, 2020_1st_recognition_solution, 2020_2nd_recognition_solution, 2020_1st_retrieval_solution.

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{zhang2021landmark,
 title={2nd Place Solution to Google Landmark Retrieval 2021},
 author={Zhang, Yuqi and Xu, Xianzhe and Chen, Weihua and Wang, Yaohua and Zhang, Fangyi},
 year={2021}
}