Unbiased Learning To Rank Algorithms (ULTRA)

Overview

🔥 News: A TensorFlow version of this package can be found in ULTRA.

This is the Unbiased Learning To Rank Algorithms (ULTRA) toolbox, which provides a codebase for experiments and research on learning to rank with human-annotated or noisy labels. With a unified data-processing pipeline, ULTRA supports multiple unbiased learning-to-rank algorithms, online learning-to-rank algorithms, and neural learning-to-rank models, as well as different methods to use and simulate noisy labels (e.g., clicks) to train and test algorithms and ranking models. User-friendly documentation can be found here.

Get Started

Create a virtual environment (optional):

pip install --user virtualenv
~/.local/bin/virtualenv -p python3 ./venv
source venv/bin/activate

Install ULTRA from source:

git clone https://github.com/ULTR-Community/ULTRA_pytorch.git
cd ULTRA_pytorch
make init

Run the toy example:

bash example/toy/offline_exp_pipeline.sh

Structure


Input Layers

  1. ClickSimulationFeed: this is the input layer that generates synthetic clicks on fixed ranked lists to feed the learning algorithm (see the simulation sketch after this list).

  2. DeterministicOnlineSimulationFeed: this is the input layer that first creates ranked lists by sorting documents according to the current ranking model, and then generates synthetic clicks on those lists to feed the learning algorithm. It can do result interleaving if required by the learning algorithm.

  3. StochasticOnlineSimulationFeed: this is the input layer that first creates ranked lists by sampling documents based on their scores under the current ranking model and the Plackett-Luce distribution, and then generates synthetic clicks on those lists to feed the learning algorithm. It can do result interleaving if required by the learning algorithm.

  4. DirectLabelFeed: this is the input layer that directly feeds the true relevance labels of each document to the learning algorithm.
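
For intuition, the sketch below illustrates, in plain NumPy and independent of ULTRA's actual classes, the two ideas these feeds build on: position-based click simulation, where a click requires both examination (rank-dependent) and attraction (relevance-dependent), and Plackett-Luce sampling of a ranked list from model scores. All function names and probability values are hypothetical.

import numpy as np

def simulate_clicks(relevance, exam_prob, click_prob, rng=None):
    # Position-based model: a click happens only if the position is
    # examined AND the document is attractive given its relevance grade.
    if rng is None:
        rng = np.random.default_rng()
    clicks = []
    for rank, rel in enumerate(relevance):
        examined = rng.random() < exam_prob[rank]
        attracted = rng.random() < click_prob[rel]
        clicks.append(int(examined and attracted))
    return clicks

def plackett_luce_sample(scores, rng=None):
    # Sample a ranking from the Plackett-Luce distribution over
    # softmax(scores): repeatedly pick one of the remaining documents
    # with probability proportional to its (exponentiated) score.
    if rng is None:
        rng = np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    weights = np.exp(scores - scores.max())
    remaining = list(range(len(scores)))
    ranking = []
    while remaining:
        p = weights[remaining] / weights[remaining].sum()
        pick = rng.choice(len(remaining), p=p)
        ranking.append(remaining.pop(pick))
    return ranking

relevance = [2, 0, 1, 0]                # graded relevance of a ranked list
exam_prob = [1.0, 0.5, 0.33, 0.25]      # examination decays with rank
click_prob = {0: 0.1, 1: 0.5, 2: 1.0}   # attraction by relevance grade
print(simulate_clicks(relevance, exam_prob, click_prob))
print(plackett_luce_sample([1.2, 0.3, 0.9, -0.5]))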

Learning Algorithms

  1. NA: this model is an implementation of the naive algorithm that directly trains models with input labels (e.g., clicks).

  2. DLA: this is an implementation of the Dual Learning Algorithm in Unbiased Learning to Rank with Unbiased Propensity Estimation.

  3. IPW: this model is an implementation of the Inverse Propensity Weighting algorithms in Learning to Rank with Selection Bias in Personal Search and Unbiased Learning-to-Rank with Biased Feedback (see the loss sketch after this list).

  4. REM: this model is an implementation of the regression-based EM algorithm in Position Bias Estimation for Unbiased Learning to Rank in Personal Search.

  5. PD: this model is an implementation of the pairwise debiasing algorithm in Unbiased LambdaMART: An Unbiased Pairwise Learning-to-Rank Algorithm.

  6. DBGD: this model is an implementation of the Dual Bandit Gradient Descent algorithm in Interactively Optimizing Information Retrieval Systems as a Dueling Bandits Problem.

  7. MGD: this model is an implementation of the Multileave Gradient Descent algorithm in Multileave Gradient Descent for Fast Online Learning to Rank.

  8. NSGD: this model is an implementation of the Null Space Gradient Descent algorithm in Efficient Exploration of Gradient Space for Online Learning to Rank.

  9. PDGD: this model is an implementation of the Pairwise Differentiable Gradient Descent algorithm in Differentiable Unbiased Online Learning to Rank.
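
To make the debiasing idea behind IPW-style algorithms concrete, here is a minimal PyTorch sketch (not ULTRA's actual implementation) of an inverse-propensity-weighted softmax loss: each click is reweighted by the inverse of its examination propensity so that, in expectation over examination, the loss matches one computed on true relevance. The function name and tensor layout are assumptions for illustration.

import torch

def ipw_softmax_loss(scores, clicks, propensity):
    # scores:     [batch, list_size] model outputs
    # clicks:     [batch, list_size] observed clicks (0/1)
    # propensity: [batch, list_size] examination propensities
    # Dividing clicks by propensities debiases them toward relevance.
    weights = clicks / propensity.clamp(min=1e-6)
    log_probs = torch.log_softmax(scores, dim=-1)
    return -(weights * log_probs).sum(dim=-1).mean()

In ULTRA's pipeline, the scores come from the ranking model, the clicks from an input feed, and the propensities from a click model or the propensity estimator described below.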

Ranking Models

  1. Linear: this is a linear ranking algorithm that computes ranking scores with a linear function.

  2. DNN: this is a neural ranking algorithm that computes ranking scores with a multi-layer perceptron (with non-linear activation functions); a minimal sketch follows this list.

  3. DLCM: this is an implementation of the Deep Listwise Context Model in Learning a Deep Listwise Context Model for Ranking Refinement (TODO).

  4. GSF: this is an implementation of the Groupwise Scoring Function in Learning Groupwise Multivariate Scoring Functions Using Deep Neural Networks (TODO).

  5. SetRank: this is an implementation of the SetRank model in SetRank: Learning a Permutation-Invariant Ranking Model for Information Retrieval (TODO).
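
As a rough illustration of the Linear and DNN scorers above, here is a minimal PyTorch sketch of an MLP that maps each document's feature vector to a scalar score. The class name, layer sizes, and activation are hypothetical, not ULTRA's exact configuration.

import torch.nn as nn

class SimpleDNNRanker(nn.Module):
    # Scores each document independently from its feature vector.
    def __init__(self, feature_dim, hidden=(256, 128)):
        super().__init__()
        layers, prev = [], feature_dim
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.ELU()]
            prev = h
        layers.append(nn.Linear(prev, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, features):
        # features: [batch, list_size, feature_dim]
        # returns:  [batch, list_size] ranking scores
        return self.net(features).squeeze(-1)

A linear ranker is the degenerate case with no hidden layers, i.e., a single nn.Linear(feature_dim, 1).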

Supported Evaluation Metrics

  1. MRR: the Mean Reciprocal Rank.

  2. ERR: the Expected Reciprocal Rank from Expected reciprocal rank for graded relevance.

  3. ARP: the Average Relevance Position.

  4. NDCG: the Normalized Discounted Cumulative Gain.

  5. DCG: the Discounted Cumulative Gain.

  6. Precision: the Precision.

  7. MAP: the Mean Average Precision.

  8. Ordered_Pair_Accuracy: the percentage of correctly ordered pairs.
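
Exact gain and discount choices vary across papers; for reference, a common formulation of DCG and NDCG (items 4 and 5 above) can be sketched as follows. This is a generic illustration, not ULTRA's metric code.

import numpy as np

def dcg_at_k(labels, k):
    # DCG with 2^rel - 1 gains and log2(rank + 1) discounts.
    labels = np.asarray(labels, dtype=float)[:k]
    gains = 2.0 ** labels - 1.0
    discounts = np.log2(np.arange(2, labels.size + 2))
    return float((gains / discounts).sum())

def ndcg_at_k(labels, k):
    # NDCG: DCG normalized by the DCG of the ideally sorted list.
    ideal = dcg_at_k(sorted(labels, reverse=True), k)
    return dcg_at_k(labels, k) / ideal if ideal > 0 else 0.0

print(ndcg_at_k([2, 0, 1, 0], k=4))  # labels of documents in ranked order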

Click Simulation Example

Create click models for click simulations

python ultra/utils/click_models.py pbm 0.1 1 4 1.0 example/ClickModel

* The output is a JSON file containing the click model that can be used for click simulation. More details can be found in the code.

(Optional) Estimate examination propensity with result randomization

python ultra/utils/propensity_estimator.py example/ClickModel/pbm_0.1_1.0_4_1.0.json <DATA_DIR> example/PropensityEstimator/
* The output is a JSON file containing the estimated examination propensities (used for IPW). DATA_DIR is the directory for the prepared data created by ./libsvm_tools/prepare_exp_data_with_svmrank.py. More details can be found in the code.

Citation

If you use ULTRA in your research, please cite it with the following BibTeX entries.

@misc{tran2021ultra,
      title={ULTRA: An Unbiased Learning To Rank Algorithm Toolbox}, 
      author={Anh Tran and Tao Yang and Qingyao Ai},
      year={2021},
      eprint={2108.05073},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}

@article{10.1145/3439861,
author = {Ai, Qingyao and Yang, Tao and Wang, Huazheng and Mao, Jiaxin},
title = {Unbiased Learning to Rank: Online or Offline?},
year = {2021},
issue_date = {February 2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {39},
number = {2},
issn = {1046-8188},
url = {https://doi.org/10.1145/3439861},
doi = {10.1145/3439861},
journal = {ACM Trans. Inf. Syst.},
month = feb,
articleno = {21},
numpages = {29},
keywords = {unbiased learning, online learning, Learning to rank}
}

Development Team

  • Qingyao Ai (QingyaoAi): Core Dev; Asst. Prof., Univ. of Utah

  • Anh Tran (anhtran1010): Core Dev; Ph.D., Univ. of Utah

  • Tao Yang (Taosheng-ty): Core Dev; Ph.D., Univ. of Utah

  • Huazheng Wang (huazhengwang): Core Dev; Ph.D., Univ. of Virginia

  • Jiaxin Mao (defaultstr): Core Dev; Asst. Prof., Renmin Univ.

Contribution

Please read the Contributing Guide before creating a pull request.

Project Organizers

  • Qingyao Ai
    • School of Computing, University of Utah
    • Homepage

License

Apache-2.0

Copyright (c) 2020-present, Qingyao Ai (QingyaoAi)
