Personal implementation of paper "Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval"

Overview

Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval

This repo provides a personal, simplified implementation of the paper Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. The code follows the official ANCE implementation.

Environment

    transformers==2.3.0
    pytrec-eval
    faiss-cpu
    wget
    python==3.6.*
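
As a quick sanity check before running anything, the pinned packages can be verified from Python (a minimal sketch; versions as listed above):

    # Sanity-check the pinned environment (sketch)
    import sys
    import transformers
    import faiss
    import pytrec_eval
    import wget

    assert sys.version_info[:2] == (3, 6), "python==3.6.* expected"
    assert transformers.__version__ == "2.3.0", "transformers==2.3.0 expected"
    print("environment OK")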

Data Download & Preprocessing

To download all the needed data, run:

    bash commands/data_download.sh

Data Preprocessing

The command to preprocess passage and document data is listed below:

    python data/msmarco_data.py \
    --data_dir $raw_data_dir \
    --out_data_dir $preprocessed_data_dir \
    --model_type {use rdot_nll for ANCE FirstP, rdot_nll_multi_chunk for ANCE MaxP} \
    --model_name_or_path roberta-base \
    --max_seq_length {use 512 for ANCE FirstP, 2048 for ANCE MaxP} \
    --data_type {use 1 for passage, 0 for document}

The data preprocessing command is included as the first step in the training command file commands/run_train.sh.
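
To make the --model_type and --max_seq_length choices concrete, the sketch below shows how FirstP and MaxP treat a long document at tokenization time. This is illustrative only: the actual logic lives in data/msmarco_data.py, and the fixed 512-token chunking for MaxP is an assumption based on 2048 = 4 x 512.

    from transformers import RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

    def first_p(text, max_seq_length=512):
        # ANCE FirstP: keep only the first max_seq_length tokens
        return tokenizer.encode(text, max_length=max_seq_length)

    def max_p_chunks(text, max_seq_length=2048, chunk_size=512):
        # ANCE MaxP (sketch): take the first max_seq_length tokens and split
        # them into chunks; each chunk is scored and the maximum score is used
        ids = tokenizer.encode(text, max_length=max_seq_length)
        return [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)]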

Warmup for Training

ANCE training starts from a pretrained BM25 warmup checkpoint. The command, with the parameters we used to train this warmup checkpoint, is in commands/run_train_warmup.py and is shown below:

    python3 -m torch.distributed.launch --nproc_per_node=1 ../drivers/run_warmup.py \
    --train_model_type rdot_nll \
    --model_name_or_path roberta-base \
    --task_name MSMarco \
    --do_train \
    --evaluate_during_training \
    --data_dir ${location of your raw data} \
    --max_seq_length 128 \
    --per_gpu_eval_batch_size=256 \
    --per_gpu_train_batch_size=32 \
    --learning_rate 2e-4  \
    --logging_steps 100   \
    --num_train_epochs 2.0  \
    --output_dir ${location for checkpoint saving} \
    --warmup_steps 1000  \
    --overwrite_output_dir \
    --save_steps 30000 \
    --gradient_accumulation_steps 1 \
    --expected_train_size 35000000 \
    --logging_steps_per_eval 1 \
    --fp16 \
    --optimizer lamb \
    --log_dir ~/tensorboard/${DLWS_JOB_ID}/logs/OSpass
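
For reference, the rdot_nll model type scores a query-passage pair by the dot product of their embeddings and trains with a negative log likelihood loss over the positive and a sampled negative. A minimal PyTorch sketch of that objective (illustrative names; not the repo's actual module):

    import torch
    import torch.nn.functional as F

    def rdot_nll_loss(q_emb, pos_emb, neg_emb):
        # q_emb, pos_emb, neg_emb: (batch, hidden) embedding tensors
        pos_scores = (q_emb * pos_emb).sum(-1, keepdim=True)  # (batch, 1)
        neg_scores = (q_emb * neg_emb).sum(-1, keepdim=True)  # (batch, 1)
        logits = torch.cat([pos_scores, neg_scores], dim=1)   # (batch, 2)
        # The positive is always at index 0
        labels = torch.zeros(q_emb.size(0), dtype=torch.long, device=q_emb.device)
        return F.cross_entropy(logits, labels)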

Training

To train the model(s) in the paper, you need to run two commands in the following order:

  1. Run commands/run_train.sh, which does three things in sequence:

    a. Data preprocessing: this is explained in the data preprocessing section above. This step checks whether the preprocessed data folder already exists and is skipped if it does.

    b. Initial ANN data generation: this step uses the pretrained BM25 warmup checkpoint to generate the initial training data (see the faiss sketch after this list). The command is as follows:

     python -m torch.distributed.launch --nproc_per_node=$gpu_no ../drivers/run_ann_data_gen.py \
     --training_dir {checkpoint location; not used for initial data generation} \
     --init_model_dir {pretrained BM25 warmup checkpoint location} \
     --model_type rdot_nll \
     --output_dir $model_ann_data_dir \
     --cache_dir $model_ann_data_dir_cache \
     --data_dir $preprocessed_data_dir \
     --max_seq_length 512 \
     --per_gpu_eval_batch_size 16 \
     --topk_training {top k candidates for ANN search (e.g., 200)} \
     --negative_sample {negative samples per query (e.g., 20)} \
     --end_output_num 0 # set to 0 only for initial data generation; do not set it otherwise
    

    c. Training: ANCE training with the most recently generated ANN data. The command is as follows:

     python -m torch.distributed.launch --nproc_per_node=$gpu_no ../drivers/run_ann.py \
     --model_type rdot_nll \
     --model_name_or_path $pretrained_checkpoint_dir \
     --task_name MSMarco \
     --triplet {optional flag; default False, pass --triplet to train with triplet loss} \
     --data_dir $preprocessed_data_dir \
     --ann_dir {location of the ANN-generated training data} \
     --max_seq_length 512 \
     --per_gpu_train_batch_size=8 \
     --gradient_accumulation_steps 2 \
     --learning_rate 1e-6 \
     --output_dir $model_dir \
     --warmup_steps 5000 \
     --logging_steps 100 \
     --save_steps 10000 \
     --optimizer lamb
    
  2. Once training starts, start another job in parallel to fetch the latest checkpoint from the ongoing training and update the training data. To do that, run:

     bash commands/run_ann_data_gen.sh
    

    The command is similar to the initial ANN data generation command explained above.
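
To make step b concrete, here is a minimal faiss sketch of the core of ANN data generation: index the corpus embeddings, retrieve the top-k candidates per query, and sample hard negatives that are not the known positive. Names and the sampling rule are illustrative assumptions; the actual logic lives in drivers/run_ann_data_gen.py.

    import random
    import faiss

    def generate_ann_negatives(query_emb, passage_emb, positives,
                               topk_training=200, negative_sample=20):
        # Flat inner-product index over all passage embeddings
        # (faiss expects float32 numpy arrays)
        index = faiss.IndexFlatIP(passage_emb.shape[1])
        index.add(passage_emb)
        # One batched search retrieves the top-k candidates for every query
        _, topk_ids = index.search(query_emb, topk_training)
        training_data = []
        for qid, candidates in enumerate(topk_ids):
            # Hard negatives: highly ranked passages that are not the positive
            negatives = [int(p) for p in candidates if int(p) != positives[qid]]
            sampled = random.sample(negatives, min(negative_sample, len(negatives)))
            training_data.append((qid, positives[qid], sampled))
        return training_data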

Inference

The command for inferring query and passage/document embeddings is the same as the initial ANN data generation command described above, since the first step of ANN data generation is inference. However, you need to add --inference to the command so the program stops after the initial inference step. commands/run_inference.sh provides a sample command.

Evaluation

The evaluation is done through "Calculate Metrics.ipynb". This notebook calculates the full-ranking and reranking metrics used in the paper, including NDCG, MRR, hole rate, and recall for passage/document and the dev/eval set specified by the user. To run it, define the following parameters at the beginning of the notebook.

    checkpoint_path = {location of the dumped query and passage/document embeddings, i.e., the output_dir of run_ann_data_gen.py}
    checkpoint = {checkpoint step to take embeddings from (e.g., 200000)}
    data_type = {0 for document, 1 for passage}
    test_set = {0 for MSMARCO dev set, 1 for TREC eval set}
    raw_data_dir =
    processed_data_dir =
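
If you prefer to compute ranking metrics outside the notebook, pytrec-eval (already in the environment list) covers NDCG and recall. A minimal sketch with inline placeholder data:

    import pytrec_eval

    # qrel: ground-truth relevance; run: retrieval scores (placeholder data)
    qrel = {"q1": {"d1": 1, "d2": 0}}
    run = {"q1": {"d1": 12.5, "d2": 11.0}}

    evaluator = pytrec_eval.RelevanceEvaluator(qrel, {"ndcg_cut.10", "recall.1000"})
    results = evaluator.evaluate(run)
    print(results["q1"]["ndcg_cut_10"], results["q1"]["recall_1000"])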

ANCE VS DPR on OpenQA Benchmarks

We also evaluate ANCE on the OpenQA benchmarks used in the parallel work DPR. At the time of our experiments, only the preprocessed NQ and TriviaQA data were released. Our experiments use these two tasks and inherit the DPR retriever evaluation. The evaluation metric is Coverage@20/100, i.e., whether the top-20/100 retrieved passages include the answer. We explain the steps to reproduce our results on the OpenQA benchmarks in this section.
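
Coverage@k can be computed directly from the retrieved passages. A small sketch (the substring-based has-answer check here is a simplification; DPR uses token-level answer matching):

    def coverage_at_k(retrieved_passages, answers, k):
        # retrieved_passages: ranked passage texts for one question
        # answers: acceptable answer strings for that question
        top_k = retrieved_passages[:k]
        return any(ans.lower() in p.lower() for p in top_k for ans in answers)

    def coverage(dataset, k):
        # dataset: list of (retrieved_passages, answers) pairs over all questions
        hits = sum(coverage_at_k(p, a, k) for p, a in dataset)
        return hits / len(dataset)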

Download data

commands/data_download.sh takes care of this step.

ANN data generation & ANCE training

Following the same training philosophy discussed above, ANN data generation and ANCE training for OpenQA require two parallel jobs.

  1. We need to preprocess the data and generate an initial training set for ANCE to start training. The command for that is provided in:
commands/run_ann_data_gen_dpr.sh

We keep this data generation job running after it creates the initial training set, as it will keep generating training data with the newest checkpoints from the training process.

  2. After the initial training set is generated, we start an ANCE training job with the commands provided in:
commands/run_train_dpr.sh

During training, the evaluation metrics are printed to TensorBoard each time the trainer receives new training data. Alternatively, you can check the metrics in the dumped file "ann_ndcg_#" in the directory specified by "model_ann_data_dir" in commands/run_ann_data_gen_dpr.sh each time new training data is generated, as sketched below.
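
A small sketch for polling those dumps (this assumes the ann_ndcg_# files are JSON with fields such as "ndcg" and "checkpoint", as in the official ANCE code; check run_ann_data_gen_dpr.py for the exact format):

    import glob
    import json
    import os

    def latest_ann_metrics(model_ann_data_dir):
        # Pick the most recently written ann_ndcg_# dump and parse it
        files = glob.glob(os.path.join(model_ann_data_dir, "ann_ndcg_*"))
        latest = max(files, key=os.path.getmtime)
        with open(latest) as f:
            return json.load(f)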

Results

The run_train.sh and run_ann_data_gen.sh files contain the commands, with the parameters we used, for passage ANCE(FirstP), document ANCE(FirstP), and document ANCE(MaxP). Our model achieves the following performance on the MSMARCO dev set and TREC eval set:

MSMARCO Dev Passage Retrieval        MRR@10   Recall@1k   Steps
ANCE(FirstP)                         0.330    0.959       600K
ANCE(MaxP)                           -        -           -

TREC DL Passage NDCG@10              Rerank   Retrieval   Steps
ANCE(FirstP)                         0.677    0.648       600K
ANCE(MaxP)                           -        -           -

TREC DL Document NDCG@10             Rerank   Retrieval   Steps
ANCE(FirstP)                         0.641    0.615       210K
ANCE(MaxP)                           0.671    0.628       139K

MSMARCO Dev Passage Retrieval        MRR@10   Steps
pretrained BM25 warmup checkpoint    0.311    60K

ANCE Single-task Training            Top-20   Top-100     Steps
NQ                                   81.9     87.5        136K
TriviaQA                             80.3     85.3        100K

ANCE Multi-task Training             Top-20   Top-100     Steps
NQ                                   82.1     87.9        300K
TriviaQA                             80.3     85.2        300K

Click the steps in the table to download the corresponding checkpoints.

Our result for document ANCE(FirstP) on the TREC eval set (top 100 retrieved documents per query) can be downloaded here. Our result for document ANCE(MaxP) on the TREC eval set (top 100 retrieved documents per query) can be downloaded here.

The TREC eval set query embeddings and their ids for our passage ANCE(FirstP) experiment can be downloaded here. The TREC eval set query embeddings and their ids for our document ANCE(FirstP) experiment can be downloaded here. The TREC eval set query embeddings and their ids for our document 2048 ANCE(MaxP) experiment can be downloaded here.

The t-SNE plots for all queries in the TREC document eval set for ANCE(FirstP) can be viewed here.

run_train.sh and run_ann_data_gen.sh contain the commands, with the parameters we used, for passage ANCE(FirstP), document ANCE(FirstP), and document 2048 ANCE(MaxP) to reproduce the results in this section. run_train_warmup.sh contains the commands to reproduce the results for the pretrained BM25 warmup checkpoint.

Note that the number of steps needed to reproduce results similar to those in the table may differ somewhat, due to differences in synchronization between the training and ANN data generation processes and other environment differences across user experiments.
