Personal implementation of paper "Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval"

Overview

This repo provides a personal implementation of the paper "Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval" (ANCE) in a simplified form. The code is adapted from the official ANCE implementation.

Environment

transformers==2.3.0
pytrec-eval
faiss-cpu
wget
python==3.6.*

Data Download & Preprocessing

To download all the needed data, run:

bash commands/data_download.sh 

Data Preprocessing

The command to preprocess passage and document data is listed below:

python data/msmarco_data.py \
--data_dir $raw_data_dir \
--out_data_dir $preprocessed_data_dir \
--model_type {use rdot_nll for ANCE FirstP, rdot_nll_multi_chunk for ANCE MaxP} \
--model_name_or_path roberta-base \
--max_seq_length {use 512 for ANCE FirstP, 2048 for ANCE MaxP} \
--data_type {use 1 for passage, 0 for document}

The data preprocessing command is included as the first step in the training script commands/run_train.sh.
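
For intuition, the difference between the two model types can be sketched as follows. This is a rough illustration assuming standard RoBERTa tokenization and a recent transformers version; the helper names are ours, not the repo's. FirstP keeps only the first max_seq_length tokens of a document, while MaxP splits the input into 512-token chunks that are scored independently, with the best-scoring chunk standing in for the document.

    from transformers import RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

    def firstp_ids(text, max_seq_length=512):
        # ANCE FirstP: truncate the document to its first max_seq_length tokens.
        return tokenizer.encode(text, max_length=max_seq_length, truncation=True)

    def maxp_chunks(text, max_seq_length=2048, chunk_size=512):
        # ANCE MaxP: keep up to max_seq_length tokens and split them into
        # chunk_size-token pieces; each chunk is scored separately and the
        # maximum chunk score represents the document.
        ids = tokenizer.encode(text, max_length=max_seq_length, truncation=True)
        return [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)]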

Warmup for Training

ANCE training starts from a pretrained BM25 warmup checkpoint. The command, with the parameters we used, to train this warmup checkpoint is in commands/run_train_warmup.py and is shown below:

    python3 -m torch.distributed.launch --nproc_per_node=1 ../drivers/run_warmup.py \
    --train_model_type rdot_nll \
    --model_name_or_path roberta-base \
    --task_name MSMarco \
    --do_train \
    --evaluate_during_training \
    --data_dir ${location of your raw data} \
    --max_seq_length 128 \
    --per_gpu_eval_batch_size=256 \
    --per_gpu_train_batch_size=32 \
    --learning_rate 2e-4  \
    --logging_steps 100   \
    --num_train_epochs 2.0  \
    --output_dir ${location for checkpoint saving} \
    --warmup_steps 1000  \
    --overwrite_output_dir \
    --save_steps 30000 \
    --gradient_accumulation_steps 1 \
    --expected_train_size 35000000 \
    --logging_steps_per_eval 1 \
    --fp16 \
    --optimizer lamb \
    --log_dir ~/tensorboard/${DLWS_JOB_ID}/logs/OSpass
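
The warmup stage trains the dual encoder on BM25 negatives with a negative log likelihood (NLL) loss over dot-product similarities (the rdot_nll model type). Below is a minimal sketch of that objective, assuming one positive and one negative passage per query; the function is illustrative, not the repo's exact code.

    import torch
    import torch.nn.functional as F

    def rdot_nll_loss(q_emb, pos_emb, neg_emb):
        # q_emb, pos_emb, neg_emb: [batch, dim] query/passage embeddings.
        pos_scores = (q_emb * pos_emb).sum(-1, keepdim=True)  # [batch, 1]
        neg_scores = (q_emb * neg_emb).sum(-1, keepdim=True)  # [batch, 1]
        logits = torch.cat([pos_scores, neg_scores], dim=1)   # [batch, 2]
        # The positive passage sits at index 0 of each row.
        labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
        return F.cross_entropy(logits, labels)

ANCE training later reuses the same style of objective but draws the negatives from ANN retrieval instead of BM25.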

Training

To train the model(s) in the paper, you need to start two commands in the following order:

  1. Run commands/run_train.sh, which does three things in sequence:

    a. Data preprocessing: this is explained in the data preprocessing section above. The step checks whether the preprocessed data folder already exists and is skipped if it does.

    b. Initial ANN data generation: this step uses the pretrained BM25 warmup checkpoint to generate the initial training data. The command is as follows (a minimal faiss sketch of this retrieval step appears after this list):

     python -m torch.distributed.launch --nproc_per_node=$gpu_no ../drivers/run_ann_data_gen.py \
     --training_dir {# checkpoint location, not used for initial data generation} \
     --init_model_dir {pretrained BM25 warmup checkpoint location} \
     --model_type rdot_nll \
     --output_dir $model_ann_data_dir \
     --cache_dir $model_ann_data_dir_cache \
     --data_dir $preprocessed_data_dir \
     --max_seq_length 512 \
     --per_gpu_eval_batch_size 16 \
     --topk_training {top k candidates for ANN search (e.g., 200)} \
     --negative_sample {negative samples per query (e.g., 20)} \
     --end_output_num 0 # only set this to 0 for initial data generation; do not set it otherwise
    

    c. Training: ANCE training with the most recently generated ANN data (the objective is the NLL loss sketched in the warmup section, now with ANN negatives). The command is as follows:

     python -m torch.distributed.launch --nproc_per_node=$gpu_no ../drivers/run_ann.py \
     --model_type rdot_nll \
     --model_name_or_path $pretrained_checkpoint_dir \
     --task_name MSMarco \
     --triplet {# optional store_true flag; set it to train with triplet loss instead of NLL} \
     --data_dir $preprocessed_data_dir \
     --ann_dir {location of the ANN generated training data} \
     --max_seq_length 512 \
     --per_gpu_train_batch_size=8 \
     --gradient_accumulation_steps 2 \
     --learning_rate 1e-6 \
     --output_dir $model_dir \
     --warmup_steps 5000 \
     --logging_steps 100 \
     --save_steps 10000 \
     --optimizer lamb
    
  2. Once training starts, start another job in parallel to fetch the latest checkpoint from the ongoing training and update the training data. To do that, run

     bash commands/run_ann_data_gen.sh
    

    The command is similar to the initial ANN data generation command explained previously.
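
As referenced in step 1b, the heart of ANN data generation is negative mining with faiss: encode the corpus and queries with the latest checkpoint, retrieve the top-k passages per query, and sample negatives from the retrieved candidates that are not labeled positive. A minimal sketch under those assumptions (the helper name and sampling details are illustrative, not the repo's exact code):

    import faiss
    import numpy as np

    def generate_ann_negatives(passage_emb, query_emb, positives,
                               topk_training=200, negative_sample=20, seed=0):
        # passage_emb: [n_passages, dim] float32, query_emb: [n_queries, dim] float32,
        # positives: per-query sets of positive passage ids.
        rng = np.random.default_rng(seed)
        index = faiss.IndexFlatIP(passage_emb.shape[1])  # inner product = dot product
        index.add(passage_emb)
        _, topk_ids = index.search(query_emb, topk_training)

        training_data = []
        for qid, candidates in enumerate(topk_ids):
            # Hard negatives: top-ranked candidates that are not known positives.
            hard_negs = [int(pid) for pid in candidates if int(pid) not in positives[qid]]
            sampled = rng.choice(hard_negs,
                                 size=min(negative_sample, len(hard_negs)),
                                 replace=False)
            for pos in positives[qid]:
                training_data.append((qid, pos, sampled.tolist()))
        return training_data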

Inference

The command for computing query and passage/document embeddings is the same as the initial ANN data generation command described above, since the first step of ANN data generation is inference. However, you need to add --inference to the command so that the program stops after the initial inference step. commands/run_inference.sh provides a sample command.
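
For intuition, inference amounts to running every query and passage through the encoder and dumping the resulting vectors. A rough sketch, assuming a recent transformers version and simple first-token pooling (the repo's actual rdot_nll head additionally applies a learned projection and layer norm):

    import torch
    from transformers import RobertaModel, RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    encoder = RobertaModel.from_pretrained("roberta-base")  # swap in the trained checkpoint
    encoder.eval()

    @torch.no_grad()
    def embed(texts, max_seq_length=512):
        batch = tokenizer(texts, max_length=max_seq_length, truncation=True,
                          padding=True, return_tensors="pt")
        hidden = encoder(**batch)[0]  # [batch, seq_len, dim]
        return hidden[:, 0]           # first-token embedding per text

    query_emb = embed(["what is dense retrieval"], max_seq_length=64)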

Evaluation

The evaluation is done through "Calculate Metrics.ipynb". This notebook calculates the full-ranking and reranking metrics used in the paper, including NDCG, MRR, hole rate, and recall, for passage/document and the dev/eval set specified by the user. To run it, define the following parameters at the beginning of the notebook.

    checkpoint_path = {location of the dumped query and passage/document embeddings, i.e. the output_dir of run_ann_data_gen.py}
    checkpoint = {checkpoint step the embeddings come from (e.g., 200000)}
    data_type = {0 for document, 1 for passage}
    test_set = {0 for MSMARCO dev set, 1 for TREC eval set}
    raw_data_dir = 
    processed_data_dir = 
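
Under the hood, metrics like these can be computed with pytrec_eval given the ranked run and the relevance judgments. A minimal sketch with toy qrels and run data; hole rate is computed by hand, since pytrec_eval only scores judged documents:

    import pytrec_eval

    qrels = {"q1": {"d3": 1, "d7": 2}}                 # graded relevance judgments
    run = {"q1": {"d3": 12.1, "d5": 10.4, "d7": 9.8}}  # retrieval scores per query

    evaluator = pytrec_eval.RelevanceEvaluator(
        qrels, {"ndcg_cut.10", "recip_rank", "recall.1000"})
    metrics = evaluator.evaluate(run)
    print(metrics["q1"]["ndcg_cut_10"], metrics["q1"]["recip_rank"])

    # Hole rate: fraction of retrieved documents with no relevance judgment.
    retrieved = list(run["q1"])
    hole_rate = sum(d not in qrels["q1"] for d in retrieved) / len(retrieved)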

ANCE vs. DPR on OpenQA Benchmarks

We also evaluate ANCE on the OpenQA benchmark used in a parallel work (DPR). At the time of our experiments, only the preprocessed NQ and TriviaQA data had been released, so our experiments use these two tasks and inherit the DPR retriever evaluation. The evaluation uses Coverage@20/100, i.e., whether the top-20/100 retrieved passages include the answer. We explain the steps to reproduce our results on the OpenQA benchmarks in this section.
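
Coverage@k itself is simple to sketch: a question counts as covered if any of its top-k retrieved passages contains one of the gold answer strings. The snippet below uses plain substring matching for illustration, whereas the DPR evaluation uses a token-level answer matcher:

    def coverage_at_k(retrieved_passages, answers, k):
        # retrieved_passages: per-question ranked lists of passage texts;
        # answers: per-question lists of acceptable answer strings.
        hits = 0
        for passages, golds in zip(retrieved_passages, answers):
            if any(g.lower() in p.lower() for p in passages[:k] for g in golds):
                hits += 1
        return hits / len(retrieved_passages)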

Download data

commands/data_download.sh takes care of this step.

ANN data generation & ANCE training

Following the same training philosophy discussed above, ANN data generation and ANCE training for OpenQA require two parallel jobs.

  1. We need to preprocess the data and generate an initial training set for ANCE to start training. The command for that is provided in:
commands/run_ann_data_gen_dpr.sh

We keep this data generation job running after it creates the initial training set, as it will keep generating training data with the newest checkpoints produced by the training process.

  2. After an initial training set is generated, we start an ANCE training job with the commands provided in:
commands/run_train_dpr.sh

During training, the evaluation metrics are printed to TensorBoard each time new training data is received. Alternatively, you can check the metrics in the dumped file "ann_ndcg_#" in the directory specified by "model_ann_data_dir" in commands/run_ann_data_gen_dpr.sh each time new training data is generated.

Results

The run_train.sh and run_ann_data_gen.sh files contain the commands with the parameters we used for passage ANCE(FirstP), document ANCE(FirstP), and document ANCE(MaxP). Our model achieves the following performance on the MSMARCO dev set and TREC eval set:

| MSMARCO Dev Passage Retrieval | MRR@10 | Recall@1k | Steps |
|-------------------------------|--------|-----------|-------|
| ANCE(FirstP) | 0.330 | 0.959 | 600K |
| ANCE(MaxP) | - | - | - |

| TREC DL Passage NDCG@10 | Rerank | Retrieval | Steps |
|-------------------------|--------|-----------|-------|
| ANCE(FirstP) | 0.677 | 0.648 | 600K |
| ANCE(MaxP) | - | - | - |

| TREC DL Document NDCG@10 | Rerank | Retrieval | Steps |
|--------------------------|--------|-----------|-------|
| ANCE(FirstP) | 0.641 | 0.615 | 210K |
| ANCE(MaxP) | 0.671 | 0.628 | 139K |

| MSMARCO Dev Passage Retrieval | MRR@10 | Steps |
|-------------------------------|--------|-------|
| pretrained BM25 warmup checkpoint | 0.311 | 60K |

| ANCE Single-task Training | Top-20 | Top-100 | Steps |
|---------------------------|--------|---------|-------|
| NQ | 81.9 | 87.5 | 136K |
| TriviaQA | 80.3 | 85.3 | 100K |

| ANCE Multi-task Training | Top-20 | Top-100 | Steps |
|--------------------------|--------|---------|-------|
| NQ | 82.1 | 87.9 | 300K |
| TriviaQA | 80.3 | 85.2 | 300K |

Click the steps in the table to download the corresponding checkpoints.

Our result for document ANCE(FirstP) on the TREC eval set (top 100 retrieved documents per query) can be downloaded here. Our result for document ANCE(MaxP) on the TREC eval set (top 100 retrieved documents per query) can be downloaded here.

The TREC eval set query embeddings and their ids for our passage ANCE(FirstP) experiment can be downloaded here. The TREC eval set query embeddings and their ids for our document ANCE(FirstP) experiment can be downloaded here. The TREC eval set query embeddings and their ids for our document 2048 ANCE(MaxP) experiment can be downloaded here.

The t-SNE plots for all the queries in the TREC document eval set for ANCE(FirstP) can be viewed here.
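
If you want to reproduce such plots from your own dumped query embeddings, here is a hypothetical sketch with scikit-learn; the file name and dump format are assumptions, so adapt them to the actual output of run_ann_data_gen.py:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    query_emb = np.load("trec_query_embeddings.npy")  # assumed dump format
    points = TSNE(n_components=2, random_state=0).fit_transform(query_emb)
    plt.scatter(points[:, 0], points[:, 1], s=8)
    plt.title("t-SNE of ANCE(FirstP) TREC eval query embeddings")
    plt.savefig("tsne_queries.png")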

run_train.sh and run_ann_data_gen.sh contain the commands with the parameters we used for passage ANCE(FirstP), document ANCE(FirstP), and document 2048 ANCE(MaxP) to reproduce the results in this section. run_train_warmup.sh contains the commands to reproduce the results for the pretrained BM25 warmup checkpoint in this section.

Note that the number of steps needed to reproduce results similar to those in the table may differ slightly, due to different synchronization between the training and ANN data generation processes and other environment differences across user experiments.
