Overview

ISC21-Descriptor-Track-1st

The 1st Place Solution of the Facebook AI Image Similarity Challenge (ISC21) : Descriptor Track.

Our solution tech report is available here: Contrastive Learning with Large Memory Bank and Negative Embedding Subtraction for Accurate Copy Detection
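
As a rough illustration of the negative embedding subtraction step named in the report title, here is a minimal sketch. The loop structure, the parameter names, and the values of k, beta, and n_iter are placeholders for illustration, not the exact implementation:

import numpy as np

def subtract_negatives(feats, negatives, k=10, beta=0.35, n_iter=1):
    # feats:     (N, D) L2-normalized descriptors to post-process
    # negatives: (M, D) L2-normalized training embeddings treated as negatives
    for _ in range(n_iter):
        sims = feats @ negatives.T                # cosine similarity to every negative
        topk = np.argsort(-sims, axis=1)[:, :k]   # k most similar negatives per descriptor
        for i in range(len(feats)):
            feats[i] -= beta * negatives[topk[i]].mean(axis=0)
        feats /= np.linalg.norm(feats, axis=1, keepdims=True)  # re-normalize
    return feats

In practice the nearest-negative search runs over the one million training images, so an index (e.g. faiss) is used rather than the brute-force matrix product above.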

setup

OS

Ubuntu 18.04

CUDA Version

11.1

environment

Run the following to create the Python environment:

conda env create -f environment.yml

data download

mkdir -p input/{query,reference,train}_images input/query_images_phase2
aws s3 cp s3://drivendata-competition-fb-isc-data/all/query_images/ input/query_images/ --recursive --no-sign-request
aws s3 cp s3://drivendata-competition-fb-isc-data/all/reference_images/ input/reference_images/ --recursive --no-sign-request
aws s3 cp s3://drivendata-competition-fb-isc-data/all/train_images/ input/train_images/ --recursive --no-sign-request
aws s3 cp s3://drivendata-competition-fb-isc-data/all/query_images_phase2/ input/query_images_phase2/ --recursive --no-sign-request
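
After the download finishes, a quick count helps confirm nothing was dropped. A minimal sketch; the expected sizes noted in the comment are the published ISC2021 dataset sizes and are worth double-checking against the competition page:

from pathlib import Path

# rough expectation: ~50k query, ~1M reference, ~1M training, ~50k phase-2 query images
for name in ["query_images", "reference_images", "train_images", "query_images_phase2"]:
    n = sum(1 for _ in Path("input", name).glob("*.jpg"))
    print(f"{name}: {n} images")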

train

Run the commands below step by step.

cd exp

CUDA_VISIBLE_DEVICES=0,1,2,3 python v83.py \
  -a tf_efficientnetv2_m_in21ft1k --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 --seed 9 \
  --epochs 5 --lr 0.1 --wd 1e-6 --batch-size 128 --ncrops 2 \
  --gem-p 1.0 --pos-margin 0.0 --neg-margin 1.0 \
  --input-size 256 --sample-size 1000000 --memory-size 20000 \
  ../input/train_images/
CUDA_VISIBLE_DEVICES=0,1,2,3 python v83.py \
  -a tf_efficientnetv2_m_in21ft1k --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 --seed 90 \
  --epochs 10 --lr 0.1 --wd 1e-6 --batch-size 128 --ncrops 2 \
  --gem-p 1.0 --pos-margin 0.0 --neg-margin 1.0 \
  --input-size 256 --sample-size 1000000 --memory-size 20000 \
  --resume ./v83/train/checkpoint_0004.pth.tar \
  ../input/train_images/

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python v86.py \
  -a tf_efficientnetv2_m_in21ft1k --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 --seed 99 \
  --epochs 7 --lr 0.1 --wd 1e-6 --batch-size 128 --ncrops 2 \
  --gem-p 1.0 --pos-margin 0.0 --neg-margin 1.0 \
  --input-size 384 --sample-size 1000000 --memory-size 20000 --weight ./v83/train/checkpoint_0005.pth.tar \
  ../input/train_images/

python v98.py \
  -a tf_efficientnetv2_m_in21ft1k --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 --seed 999 \
  --epochs 3 --lr 0.1 --wd 1e-6 --batch-size 64 --ncrops 2 \
  --gem-p 1.0 --pos-margin 0.0 --neg-margin 1.0 --weight ./v86/train/checkpoint_0005.pth.tar \
  --input-size 512 --sample-size 1000000 --memory-size 20000 \
  ../input/train_images/

python v107.py \
  -a tf_efficientnetv2_m_in21ft1k --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 --seed 99999 \
  --epochs 10 --lr 0.5 --wd 1e-6 --batch-size 16 --ncrops 2 \
  --gem-p 1.0 --pos-margin 0.0 --neg-margin 1.1 --weight ./v98/train/checkpoint_0001.pth.tar \
  --input-size 512 --sample-size 1000000 --memory-size 1000 \
  ../input/train_images/
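
The four stages fine-tune the same backbone at progressively larger input sizes (256 → 384 → 512), each resuming from the previous stage's checkpoint. The --pos-margin, --neg-margin, and --memory-size flags correspond to a margin-based contrastive loss computed against a large cross-batch memory bank of past embeddings. A minimal sketch of how those flags map onto pytorch-metric-learning primitives, assuming that library is used; the embedding size is a placeholder:

from pytorch_metric_learning import losses

# contrastive loss with the margins from the flags above; memory_size is the
# number of past embeddings kept in the cross-batch memory (--memory-size)
loss_fn = losses.CrossBatchMemory(
    losses.ContrastiveLoss(pos_margin=0.0, neg_margin=1.0),
    embedding_size=256,   # descriptor dimensionality (placeholder)
    memory_size=20000,
)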

The final model weight can be downloaded from here: https://drive.google.com/file/d/1ySea-NJp_J0aWvma_WmVbc3Hnwf5LHUf/view?usp=sharing. With this weight you can run the inference code without training. To place the weight in the expected location, run the following commands after downloading it:

mkdir -p exp/v107/train
mv checkpoint_0009.pth.tar exp/v107/train/
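
To sanity-check the download before running inference, the checkpoint can be inspected directly. A minimal sketch, assuming the checkpoint is a dict with a state_dict entry as saved by the training scripts:

import torch

ckpt = torch.load("exp/v107/train/checkpoint_0009.pth.tar", map_location="cpu")
state_dict = ckpt["state_dict"]          # assumption: standard checkpoint layout
print(len(state_dict), "tensors")
print(next(iter(state_dict)))            # first key, typically 'module.'-prefixed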

inference

Note that faiss doesn't work with the A100, so I used 4x GTX 1080 Ti for the post-processing.

cd exp

python v107.py -a tf_efficientnetv2_m_in21ft1k --batch-size 128 --mode extract --gem-eval-p 1.0 --weight ./v107/train/checkpoint_0009.pth.tar --input-size 512 --target-set qrt ../input/

# this script generates final prediction result files
python ../scripts/postprocess.py
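
The --gem-eval-p flag controls the exponent of the generalized-mean (GeM) pooling applied to the backbone feature map at extraction time. A minimal sketch of GeM pooling for reference; with p = 1.0 it reduces to plain average pooling, while larger p approaches max pooling:

import torch

def gem(x, p=1.0, eps=1e-6):
    # x: (B, C, H, W) feature map -> (B, C) pooled descriptor
    return x.clamp(min=eps).pow(p).mean(dim=(-2, -1)).pow(1.0 / p)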

Submission files are written here:

  • exp/v107/extract/v107_iso.h5 # descriptor track
  • exp/v107/extract/v107_iso.csv # matching track
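
To verify the descriptor file before submitting, you can peek at its contents. A minimal sketch; the exact dataset names follow the ISC submission format, so check the competition docs if yours differ:

import h5py

with h5py.File("exp/v107/extract/v107_iso.h5", "r") as f:
    for name, dset in f.items():   # expected: query/reference descriptors plus id lists
        print(name, dset.shape, dset.dtype)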

descriptor track local evaluation score:

{
  "average_precision": 0.9479039085717805,
  "recall_p90": 0.9192546583850931
}
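
For reference, the two numbers above can be computed from ranked query-reference predictions with binary ground truth. A minimal sketch of micro-average precision and recall at precision 90; the official ISC2021 evaluation script is the authoritative implementation:

import numpy as np

def evaluate(scores, is_correct, n_positives):
    # scores:     (P,) predicted similarity for each candidate pair
    # is_correct: (P,) 1 if the pair is a true match, else 0
    order = np.argsort(-scores)
    correct = is_correct[order].astype(float)
    tp = np.cumsum(correct)
    precision = tp / np.arange(1, len(correct) + 1)
    recall = tp / n_positives
    average_precision = float((precision * correct).sum() / n_positives)  # micro-AP
    at_p90 = recall[precision >= 0.9]
    recall_p90 = float(at_p90.max()) if at_p90.size else 0.0
    return average_precision, recall_p90
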
Comments
  • Bugs?

    Congratulations! We really appreciate the work. When I run the following:

    python v107.py \
      -a tf_efficientnetv2_m_in21ft1k --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 --seed 99999 \
      --epochs 10 --lr 0.5 --wd 1e-6 --batch-size 16 --ncrops 2 \
      --gem-p 1.0 --pos-margin 0.0 --neg-margin 1.1 --weight ./v98/train/checkpoint_0001.pth.tar \
      --input-size 512 --sample-size 1000000 --memory-size 1000 \
      ../input/training_images/
    

    I come across this error:

    Traceback (most recent call last):                                              
      File "v107.py", line 774, in <module>
        train(args)
      File "v107.py", line 425, in train
        mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
      File "/home/wangwenhao/anaconda3/envs/ISC/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
        return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
      File "/home/wangwenhao/anaconda3/envs/ISC/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
        while not context.join():
      File "/home/wangwenhao/anaconda3/envs/ISC/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 150, in join
        raise ProcessRaisedException(msg, error_index, failed_process.pid)
    torch.multiprocessing.spawn.ProcessRaisedException: 
    
    -- Process 5 terminated with the following error:
    Traceback (most recent call last):
      File "/home/wangwenhao/anaconda3/envs/ISC/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
        fn(i, *args)
      File "/home/wangwenhao/fbisc-descriptor-1st/exp/v107.py", line 573, in main_worker
        train_one_epoch(train_loader, model, loss_fn, optimizer, scaler, epoch, args)
      File "/home/wangwenhao/fbisc-descriptor-1st/exp/v107.py", line 595, in train_one_epoch
        labels = torch.cat([torch.tile(i, dims=(args.ncrops,)), torch.tensor(j)])
    ValueError: only one element tensors can be converted to Python scalars
    

    Do you know how to fix it? Thanks.

    opened by WangWenhao0716 14
  • data augment is wrong

    train_dataset = ISCDataset(
        train_paths,
        NCropsTransform(
            transforms.Compose(aug_moderate),
            transforms.Compose(aug_hard),
            args.ncrops,
        ),
    )
    

    error log: apply_transform() takes from 2 to 3 positional arguments but 5 were given

    opened by AItechnology 5
  • Cannot load state dict for model

    Thanks for your amazing work. But I encounter a problem when I use the checkpoint_0009.pth.tar checkpoint:

    • When I don't remove model = nn.DataParallel(model), I encounter this error:
            size mismatch for module.backbone.bn1.weight: copying a param with shape torch.Size([24]) from checkpoint, the shape in current model is 
    torch.Size([64]).
            size mismatch for module.backbone.bn1.bias: copying a param with shape torch.Size([24]) from checkpoint, the shape in current model is torch.Size([64]).
            size mismatch for module.backbone.bn1.running_mean: copying a param with shape torch.Size([24]) from checkpoint, the shape in current model is torch.Size([64]).
            size mismatch for module.backbone.bn1.running_var: copying a param with shape torch.Size([24]) from checkpoint, the shape in current model is torch.Size([64]).
            size mismatch for module.fc.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([256, 2048])
    
    • Then I remove the line model = nn.DataParallel(model). The model seems to load the checkpoint successfully, but when I feed the same input to the model, the output feature vector is different each time I run it. I guess the state dict is not actually loaded, so the model falls back to randomly initialized weights.
    • Then I change strict=False to strict=True in model.load_state_dict(state_dict=state_dict, strict=False), and I encounter RuntimeError: Error(s) in loading state_dict for ISCNet: Missing key(s) in state_dict:. I found that the state dict keys of the model and the checkpoint are totally different, even in naming pattern. The model and checkpoint keys are attached: checkpoint.txt model.txt. How can I solve this problem?
    opened by NguyenThanhAI 2
  • Unable to reproduce Stage 1 results

    Hi, I attempted to reproduce the Stage 1 training using your provided code, but was unable to obtain the reported muAP of 0.5831. I instead obtained this result at epoch 9 (indexed from 0):

    Average Precision: 0.49554
    Recall at P90    : 0.32701
    Threshold at P90 : -0.375733
    Recall at rank 1:  0.62448
    Recall at rank 10: 0.65961
    

    I also saw that you continued training from epoch 5, but these are the results I obtained at epoch 5:

    Average Precision: 0.47977
    Recall at P90    : 0.32501
    Threshold at P90 : -0.376619
    Recall at rank 1:  0.61409
    Recall at rank 10: 0.64903
    

    Both sets of results were obtained on the private ground truth set of Phase 1, using image size 512. Is it possible to provide some insight as to what is happening here? Thank you.

    opened by avrilwongaw 1
  • about the train output feature

    Sorry to bother you again. I want to train the model with a smaller backbone such as resnet50, because I only have three GPUs. I run this command:

    CUDA_VISIBLE_DEVICES=0,1,2 python v83.py  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 --seed 9 \
      --epochs 5 --lr 0.1 --wd 1e-6 --batch-size 96 --ncrops 2 \
      --gem-p 1.0 --pos-margin 0.0 --neg-margin 1.0 \
      --input-size 256 --sample-size 1000000 --memory-size 20000 \
    /root/zhx3/data/fb_train_data/train
    

    I found a strange problem. I tested the checkpoint_000{0..4}.pth.tar models, and only checkpoint_0002.pth.tar outputs different embeddings when the inputs are different. I mean, the other models output the same embedding no matter what input you feed them. Thanks in advance. The loss log looks like:

    epoch 5:   0%|          | 0/15873 [00:00<?, ?it/s]=> loading checkpoint './v83/train/checkpoint_0004.pth.tar'
    => loaded checkpoint './v83/train/checkpoint_0004.pth.tar' (epoch 5)
    epoch 6:   0%|          | 0/15873 [00:00<?, ?it/s]epoch=5, loss=1.0154363534772417
    epoch 7:   0%|          | 0/15873 [00:00<?, ?it/s]epoch=6, loss=1.012835873522891
    
    opened by Usernamezhx 1
  • about the memory size

    python v107.py \
      -a tf_efficientnetv2_m_in21ft1k --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 --seed 99999 \
      --epochs 10 --lr 0.5 --wd 1e-6 \
      --gem-p 1.0 --pos-margin 0.0 --neg-margin 1.1 --weight ./v98/train/checkpoint_0001.pth.tar \
      --input-size 512 --sample-size 1000000 --memory-size 1000 \
      ../input/training_images/
    

    Why not set --memory-size larger, such as 20000? Thanks in advance.

    opened by Usernamezhx 1
  • will v107 overfit for phase2?

    Congratulations, and thanks for sharing.

    I find that v107 only uses the roughly 5k query-reference pairs (i.e. the ground truth of Phase 1) as positives. How can we know whether it overfits to Phase 2?

    opened by liangzimei 1
  • access denied for dataset on aws

    Thanks for your work! I have problems downloading the dataset from the given AWS buckets:

    $ aws s3 cp s3://drivendata-competition-fb-isc-data/all/query_images/ input/query_images/ --recursive --no-sign-request
    fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
    

    Do I need special permissions to download the data?

    opened by sebastianlutter 0
  • Final optimizer state for the model

    Hello @lyakaap

    Thanks a lot for this work. I am trying to take this and fine-tune it on a certain task. Could you provide the final optimizer state after the 4th stage of training? We want to try an experiment where it would be very useful.

    Thank you.

    opened by shubhamjain0594 11