reXmeX is a recommender system evaluation metric library.

Overview




Please look at the Documentation and External Resources.

reXmeX consists of utilities for recommender system evaluation. First, it provides a comprehensive collection of metrics for the evaluation of recommender systems. Second, it includes a variety of methods for reporting and plotting the performance results. Implemented metrics cover a range of well-known metrics and newly proposed metrics from data mining conferences (ICDM, CIKM, KDD) and papers from prominent journals.


An introductory example

The following example loads a synthetic dataset which has the source_id, target_id, source_group and target_group keys besides the mandatory y_true and y_scores. The dataset has binary labels and predicted probability scores. We read the dataset and define a default ClassificationMetricSet instance for the evaluation of the predictions. Using this metric set we create a score card, group the predictions by the source_group key, and return a performance metric report.

from rexmex.scorecard import ScoreCard
from rexmex.dataset import DatasetReader
from rexmex.metricset import ClassificationMetricSet

reader = DatasetReader()
scores = reader.read_dataset()

metric_set = ClassificationMetricSet()

score_card = ScoreCard(metric_set)

report = score_card.generate_report(scores, grouping=["source_group"])

Scorecard


A rexmex score card allows reporting recommender system performance metrics, plotting them, and saving the results.

Metric Sets

Metric sets allow users to calculate a range of evaluation metrics for a pair of ground-truth and predicted label vectors. We provide a general MetricSet class, and specialized metric sets with pre-set metrics in the following general categories:

  • Rating
  • Classification
  • Ranking
  • Coverage

Rating Metric Set

These metrics assume that items are scored explicitly and ratings are predicted by a regression model.
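As an illustration of the kind of quantity a rating metric computes (a plain NumPy sketch, not the rexmex API), RMSE and MAE over explicit ratings:

```python
import numpy as np

def rmse(y_true, y_score):
    """Root mean squared error between true and predicted ratings."""
    y_true, y_score = np.asarray(y_true, float), np.asarray(y_score, float)
    return float(np.sqrt(np.mean((y_true - y_score) ** 2)))

def mae(y_true, y_score):
    """Mean absolute error between true and predicted ratings."""
    y_true, y_score = np.asarray(y_true, float), np.asarray(y_score, float)
    return float(np.mean(np.abs(y_true - y_score)))

y_true = [4.0, 3.0, 5.0, 1.0]
y_score = [3.5, 3.0, 4.0, 2.0]
print(rmse(y_true, y_score))  # 0.75
print(mae(y_true, y_score))   # 0.625
```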


Classification Metric Set

These metrics assume that the items are scored with raw probabilities (these can be binarized).
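To illustrate the binarization step and a typical classification metric (a plain NumPy sketch, not the rexmex API), thresholding probabilities at 0.5 and computing precision and recall from the confusion counts:

```python
import numpy as np

def binarize(y_score, threshold=0.5):
    """Turn raw probabilities into 0/1 predictions."""
    return (np.asarray(y_score) >= threshold).astype(int)

def precision_recall(y_true, y_pred):
    """Precision and recall from binary labels and binary predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0]
y_score = [0.9, 0.6, 0.4, 0.8, 0.2]
p, r = precision_recall(y_true, binarize(y_score))
print(p, r)  # 0.666... 0.666...
```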


Ranking Metric Set

These metrics assume that the recommender produces a ranked list of items and evaluate the quality of that ranking (e.g., MR, MRR, Hits@k).
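To illustrate what ranking metrics capture (a plain NumPy sketch, not the rexmex API), mean reciprocal rank and Hits@k over the 1-based ranks of the relevant items:

```python
import numpy as np

def mean_reciprocal_rank(ranks):
    """Mean of 1/rank; ranks are the 1-based positions of the relevant items."""
    return float(np.mean(1.0 / np.asarray(ranks, float)))

def hits_at_k(ranks, k=10):
    """Fraction of queries whose relevant item appears in the top k."""
    return float(np.mean(np.asarray(ranks) <= k))

ranks = [1, 2, 10, 50]
print(mean_reciprocal_rank(ranks))  # (1 + 0.5 + 0.1 + 0.02) / 4 = 0.405
print(hits_at_k(ranks, k=10))       # 3 of 4 ranks are <= 10 -> 0.75
```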

Coverage Metric Set

These metrics measure how well the recommender system covers the available items in the catalog; in other words, they measure the diversity of the predictions.
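A minimal sketch of item coverage (illustrative only, not the rexmex API): the fraction of catalog items that get recommended at least once.

```python
def item_coverage(possible_items, recommended_items):
    """Fraction of the catalog recommended at least once."""
    possible = set(possible_items)
    recommended = set(recommended_items) & possible
    return len(recommended) / len(possible)

catalog = ["a", "b", "c", "d", "e"]
recs = ["a", "b", "b", "c", "d"]     # "e" is never recommended
print(item_coverage(catalog, recs))  # 4 of 5 items covered -> 0.8
```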


Documentation and Reporting Issues

Head over to our documentation to find out more about installation and data handling, a full list of implemented methods, and datasets. For a quick start, check out our examples.

If you notice anything unexpected, please open an issue and let us know. If you are missing a specific method, feel free to open a feature request. We are motivated to constantly make RexMex even better.


Installation via the command line

RexMex can be installed with the following command after the repo is cloned.

$ python setup.py install

Installation via pip

RexMex can be installed with the following pip command.

$ pip install rexmex

As we create new releases frequently, upgrading the package regularly is beneficial.

$ pip install rexmex --upgrade

Running tests

$ pytest ./tests/unit --cov=rexmex
$ pytest ./tests/integration --cov=rexmex

License

Comments
  • Add static type checking with MyPy


    Summary

    This PR adds optional static type checking using mypy, which uses the type hints to check for errors that don't show up from unit tests.

    • [x] Code passes all tests
    • [x] Unit tests provided for these changes
    • [x] Documentation and docstrings added for these changes

    Changes

    1. Add mypy environment to tox.ini
    2. Add call to mypy via tox in the GHA configuration for CI
    3. Make a few improvements to code based on suggestions from type checker
    4. Add additional tests
    opened by cthoyt 6
  • Annotate redundant functions


    Summary

    As a follow-up to #33, this PR adds a duplicate_of annotation to the rexmex.utils.Annotator class and begins annotating which functions are duplicate of each other.

    ~Caveat I'm not really happy with the direction of which is the "duplicate" in many cases, e.g., where miss_rate is the "canonical" one and "false_negative_rate" is the duplicate~

    • [x] Code passes all tests
    • [x] Unit tests provided for these changes
    • [x] Documentation and docstrings added for these changes
    • [x] Check all functions are annotated, and correctly

    Changes

    • Add new annotation duplicate_of to rexmex.utils.Annotator
    • Annotate duplicate functions (e.g., precision_score is a duplicate of positive_predictive_value)
    • Pins pandas<=1.3.5 since they just put the 1.4 release candidate up on PyPI this morning and it doesn't work with the conda env in the GHA workflow
    opened by cthoyt 5
  • Why are there duplicate functions?


    I noticed that there are duplicate functions such as miss_rate()/false_negative_rate() and fall_out()/false_positive_rate(). What's the reason for the duplication?

    opened by cthoyt 3
  • Remove deprecated sklearn dependency


    Summary

    Fixes #56 by removing deprecated sklearn dependency.

    • [x] Code passes all tests
    • [x] Unit tests provided for these changes (does not apply)
    • [x] Documentation and docstrings added for these changes (does not apply)

    Changes

    • Remove deprecated sklearn dependency
    opened by dobraczka 2
  • Pandas versioning issue


    Hey rexmex team,

    I wanted to ask you if there is a specific reason why your installation requires pandas to be of this version?

    Since this package gets installed with the latest version of PyKEEN, I am facing some versioning issues due to the Pandas version restriction of your package.

    The related line of code: https://github.com/AstraZeneca/rexmex/blob/44f453ff20e92569270b9e1cfcb75b44b7839128/setup.py#L3

    Apologies if it's a silly question but: is it indeed a strict requirement for rexmex to have Pandas<1.3.5 or is this something we can modify in rexmex setup.py? ( if so I can open a related PR for it )

    Thank you in advance! Best, Dimitris

    opened by DimitrisAlivas 2
  • Coverage refactor, added CoverageMetricSet and CoverageScoreCard


    Summary

    Please provide a high-level summary of the changes and notes for the reviewers

    • [X] Code passes all tests
    • [X] Unit tests provided for these changes
    • [X] Documentation and docstrings added for these changes

    Changes

    • changed signature for the Coverage metrics, it now requires supplying the relevant user and item spaces plus a list of tuples (user, item) as final predictions
    • added the item coverage and user coverage metrics to CoverageMetricSet
    • created a ScoreCard for coverage metrics (they need a different signature than classification and regression)
    opened by kajocina 2
  • Add binarize annotation


    Summary

    Similarly to #35, this PR adds an additional annotation for functions that need to be binarized.

    • [x] Code passes all tests
    • [x] Unit tests provided for these changes
    • [x] Documentation and docstrings added for these changes

    Changes

    • Add additional keyword argument binarize to rexmex.utils.Annotator.annotate. This has a default of False, since most functions do not need to be binarized.
    • Annotate binarize=True on the functions that were binarized in the rexmex.metricset.ClassificationMetricSet.__init__

    Future outlook

    This improvement will enable the later implementation of automated collection and processing of metric functions to improve the rexmex.metricset.ClassificationMetricSet class.

    opened by cthoyt 2
  • Add additional classification function annotations


    Summary

    Closes #28

    This PR adds four new annotations to classification functions:

    • lower bound inclusive
    • upper bound inclusive
    • a description
    • a URL link to more information / citation

    Checks:

    • [x] Code passes all tests
    • [x] Unit tests provided for these changes
    • [x] Documentation and docstrings added for these changes

    Changes

    This PR adds new annotation requirements and applies them to all classification functions

    opened by cthoyt 2
  • Add basic coverage stat


    Summary

    Added a basic coverage statistic + some tests. The naming convention might need to be refactored though, not sure if item_coverage() is the right name (or perhaps catalogue_coverage() instead?)

    • [X] Code passes all tests
    • [X] Unit tests provided for these changes
    • [X] Documentation and docstrings added for these changes

    Changes

    • added a new metric (coverage) which checks how many of the possible objects/items get recommended at least once, expressed as a fraction (if 1 out of 5 items never gets recommended, coverage is 80%)
    opened by kajocina 2
  • Demonstrate annotating structured metadata to classification functions


    Summary

    This PR demonstrates a potential solution to #28 in a pilot applied to classification metrics. It could be extended in a future pull request to other branches of the package.

    • [x] Code passes all tests
    • [x] Unit tests provided for these changes
    • [x] Documentation and docstrings added for these changes

    Changes

    • [x] Add an annotate decorator that adds some structured information to classification functions
    • [x] Add tests to make sure the data is accessible
    • [x] Add unit test to ensure all classification functions are annotated (1afd68b)
    • [x] Annotate all classification functions
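The PR itself contains the real implementation; as a rough, hypothetical sketch of the idea (names and signatures are illustrative, not the actual rexmex API), such a decorator might attach structured metadata to metric functions and collect them in a registry:

```python
registry = {}

def annotate(lower=0.0, upper=1.0, description="", link="", binarize=False):
    """Hypothetical decorator: attach structured metadata to a metric function."""
    def wrap(func):
        func.lower = lower
        func.upper = upper
        func.description = description
        func.link = link
        func.binarize = binarize
        registry[func.__name__] = func  # collect for later metric-set generation
        return func
    return wrap

@annotate(description="Fraction of correct predictions", binarize=True)
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(accuracy.binarize, "accuracy" in registry)  # True True
```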

    Future

    Before finalizing this PR, I had also used the annotate function to make a registry of functions. This could be used to make the generation/maintenance of the ClassificationMetricSet much easier, but I'd save that for a different PR.

    opened by cthoyt 2
  • API Suggestions


    Right now it's a bit round-about to get a scorecard for a given dataset since it expects a pandas format. I'd suggest exposing ScoreCard._get_performance_metrics as a public user interface and also encourage people to use that directly in case they're generating their own y_true and y_score and don't want to write their own code to generate a pandas dataframe from it, just for rexmex to need to unpack it.

    My example is in PyKEEN, where we do just that: https://github.com/pykeen/pykeen/blob/799e224e772176703d796a9247bfcc179d343c6c/src/pykeen/evaluation/sklearn.py#L129-L142

    I'd also say it would be worth adding a second introductory example based on this in the README.
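For context, wrapping raw arrays into the frame the score card expects is short (column names taken from the introductory example; a sketch only):

```python
import pandas as pd

# y_true / y_scores as produced by any model evaluation loop
y_true = [1, 0, 1, 1]
y_scores = [0.9, 0.3, 0.6, 0.7]

scores = pd.DataFrame({"y_true": y_true, "y_scores": y_scores})
print(scores.shape)  # (4, 2)
```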

    opened by cthoyt 2
  • Annotate rankings (help wanted)


    Summary

    This PR uses the rexmex.utils.Annotator to annotate information about ranking metrics (e.g., MR, MRR, Hits @ K). I'm not familiar with all of the rankings, so help would be great on this one. This would especially be good for first-time contributors, since a lot of it is busy work of looking up metrics, finding out about their properties, etc. A potential contributor could make a branch off of mine and then either PR it directly, or PR it into my fork (or just post the curation as a comment in this PR, and I can make the code updates while crediting them as a co-author on the relevant commits)

    • [ ] Code passes all tests
    • [ ] Unit tests provided for these changes
    • [ ] Documentation and docstrings added for these changes
    • [ ] https://github.com/AstraZeneca/rexmex/pull/43, since it would be good to re-use its generalized testing framework

    Changes

    • [x] Add annotator to rexmex.metrics.ratings and annotate its functions
    • [ ] Switch construction of metric set to use the annotator's registry
    opened by cthoyt 0
  • Improve binning in `binarize()`


    The current binarize function uses a cutoff of 0.5 for binarization: https://github.com/AstraZeneca/rexmex/blob/3e266529761281ae832e49736e48d3e46f3b4af4/rexmex/utils.py#L28-L34

    This is an issue for PyKEEN, where the scores that come from a model could all be on the range of [-5,-2]. The current TODO text says to use https://en.wikipedia.org/wiki/Youden%27s_J_statistic, but it's not clear how that would be used.

    As an alternative, the NetMF package implements the following code for constructing an indicator that might be more applicable (though I don't personally recognize what method this is, and unfortunately it's not documented):

    import numpy as np

    def construct_indicator(y_score, y):
        """Rank the labels by the scores directly."""
        # number of true labels per row (np.int is removed in modern NumPy)
        num_label = np.sum(y, axis=1, dtype=int)
        # column indices sorted by descending score
        y_sort = np.fliplr(np.argsort(y_score, axis=1))
        y_pred = np.zeros_like(y, dtype=int)
        for i in range(y.shape[0]):
            for j in range(num_label[i]):
                y_pred[i, y_sort[i, j]] = 1
        return y_pred
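For reference, Youden's J picks the cutoff that maximizes J = TPR - FPR over candidate thresholds; a pure-NumPy sketch (illustrative only, not a proposed rexmex implementation; assumes both classes are present):

```python
import numpy as np

def youden_threshold(y_true, y_score):
    """Pick the score cutoff that maximizes J = TPR - FPR."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, float)
    pos, neg = np.sum(y_true == 1), np.sum(y_true == 0)
    best_j, best_t = -1.0, 0.0
    for t in np.unique(y_score):
        y_pred = (y_score >= t).astype(int)
        tpr = np.sum((y_pred == 1) & (y_true == 1)) / pos
        fpr = np.sum((y_pred == 1) & (y_true == 0)) / neg
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t

# scores far from the [0, 1] range, as in the PyKEEN use case
y_true = [0, 0, 1, 1]
y_score = [-5.0, -4.0, -3.0, -2.5]
print(youden_threshold(y_true, y_score))  # -3.0
```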
    
    opened by cthoyt 0
  • Add function keys and annotate ratings


    Summary

    This PR streamlines generating metric sets and annotates more functions.

    • [x] Code passes all tests
    • [x] Unit tests provided for these changes
    • [x] Documentation and docstrings added for these changes

    Changes

    • [x] Update the rexmex.utils.Annotator class to include a key. If it's not given, this defaults to the function's name. The registry now uses the key instead of the function's name
    • [x] Update the metric sets to load the function names directly from the keys in the annotator's dictionary
    • [x] Annotate functions in the ratings module
    • [x] Generalizes tests to make it easier to test the existence of annotations for ratings, coverage, and rankings
    opened by cthoyt 0
  • Adjusted mean rank


    @mberr's adjusted mean rank addresses some of the problems with the mean rank, including its size dependence. Reference: https://arxiv.org/abs/2002.06914
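Roughly, the adjusted mean rank divides the observed mean rank by the mean rank expected under random ordering, which for a candidate set of size n is (n + 1) / 2. A sketch based on the referenced paper (see the paper for the exact definition):

```python
import numpy as np

def adjusted_mean_rank(ranks, num_candidates):
    """Observed mean rank divided by the chance-level mean rank."""
    ranks = np.asarray(ranks, float)
    expected = (np.asarray(num_candidates, float) + 1.0) / 2.0
    return float(ranks.mean() / expected.mean())

ranks = [1, 3, 2, 10]
num_candidates = [100, 100, 100, 100]
print(adjusted_mean_rank(ranks, num_candidates))  # 4.0 / 50.5, about 0.079
```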

    opened by cthoyt 3
Releases
  • v_00102(Sep 28, 2022)

    What's Changed

    • Docstring fix by @kajocina in https://github.com/AstraZeneca/rexmex/pull/47
    • added CoverageScoreCard to init by @kajocina in https://github.com/AstraZeneca/rexmex/pull/48
    • Remove broken link to examples by @benedekrozemberczki in https://github.com/AstraZeneca/rexmex/pull/50
    • Unfix pandas version in RexMex requirements by @GavEdwards in https://github.com/AstraZeneca/rexmex/pull/54
    • Release 0.1.2 by @GavEdwards in https://github.com/AstraZeneca/rexmex/pull/55

    Note: version 0.1.2 due to an issue with creating the 0.1.1 release.

    Full Changelog: https://github.com/AstraZeneca/rexmex/compare/v_00100...v_00102

  • v_00100(Jan 7, 2022)

    What's Changed

    • Use registry pattern for ClassificationMetricSet by @cthoyt in https://github.com/AstraZeneca/rexmex/pull/40
    • Coverage refactor, added CoverageMetricSet and CoverageScoreCard by @kajocina in https://github.com/AstraZeneca/rexmex/pull/42
    • Annotate redundant functions by @cthoyt in https://github.com/AstraZeneca/rexmex/pull/41
  • v_00015(Jan 4, 2022)

    What's Changed

    • Update name in citation by @cthoyt in https://github.com/AstraZeneca/rexmex/pull/39
    • Add binarize annotation by @cthoyt in https://github.com/AstraZeneca/rexmex/pull/36
    • Cleanup scorecard interface by @cthoyt in https://github.com/AstraZeneca/rexmex/pull/38
    • Improve testing by @cthoyt in https://github.com/AstraZeneca/rexmex/pull/37

    Full Changelog: https://github.com/AstraZeneca/rexmex/compare/v_00014...v_00015

  • v_00014(Jan 4, 2022)

    What's Changed 🦖🦖

    • Demonstrate annotating structured metadata to classification functions by @cthoyt in https://github.com/AstraZeneca/rexmex/pull/29
    • Add additional classification function annotations by @cthoyt in https://github.com/AstraZeneca/rexmex/pull/35

    Full Changelog: https://github.com/AstraZeneca/rexmex/compare/v_00013...v_00014

  • v_00013(Dec 13, 2021)

  • v_00012(Dec 10, 2021)

  • v_00011(Dec 7, 2021)

  • v_00010(Dec 6, 2021)

  • v_00009(Dec 2, 2021)

  • v_0007(Nov 29, 2021)

  • v_00008(Nov 29, 2021)

  • v_00006(Nov 25, 2021)

    The new release separates metrics and creates namespaces based on metric categories. This helps with modularity and organization.

    Results in namespaces for:

    • Rating
    • Classification
    • Ranking
    • Coverage
  • v_00005(Nov 24, 2021)

    Library now includes:

    • Positive and negative likelihood ratio
    • Informedness and markedness
    • Threat score and critical success index
  • Fowlkes-Mallows index
    • Prevalence threshold
    • Diagnostic odds ratio
  • v_00004(Nov 23, 2021)

    The new release covers:

    • False Negative/Positive
    • True Positive/Negative
    • FPR, TPR, FNR, TNR
    • Specificity, Selectivity, False Omission Rate, False Discovery Rate
    • Miss Rate, Fall Out
    • Positive Predictive Value, Negative Predictive Value
  • v_00003(Nov 22, 2021)

    New features and bug fixes:

    • Normalization of targets
    • Metric set behaviour changed
    • New dataset for testing
    • Completed test coverage
    • Updated setup with tags and licensing
  • v_00001(Nov 22, 2021)

Owner
AstraZeneca
Data and AI: Unlocking new science insights
E-Commerce recommender demo with real-time data and a graph database

🔍 E-Commerce recommender demo 🔍 This is a simple stream setup that uses Memgraph to ingest real-time data from a simulated online store. Data is str

g-despot 3 Feb 23, 2022
Real time recommendation playground

concierge A continuous learning collaborative filter1 deployed with a light web server2. Distributed updates are live (real time pubsub + delta traini

Mark Essel 16 Nov 07, 2022
Accuracy-Diversity Trade-off in Recommender Systems via Graph Convolutions

Accuracy-Diversity Trade-off in Recommender Systems via Graph Convolutions This repository contains the code of the paper "Accuracy-Diversity Trade-of

2 Sep 16, 2022
A Python implementation of LightFM, a hybrid recommendation algorithm.

LightFM Build status Linux OSX (OpenMP disabled) Windows (OpenMP disabled) LightFM is a Python implementation of a number of popular recommendation al

Lyst 4.2k Jan 02, 2023
NVIDIA Merlin is an open source library designed to accelerate recommender systems on NVIDIA’s GPUs.

NVIDIA Merlin is an open source library providing end-to-end GPU-accelerated recommender systems, from feature engineering and preprocessing to training deep learning models and running inference in

420 Jan 04, 2023
A tensorflow implementation of the RecoGCN model in a CIKM'19 paper, titled with "Relation-Aware Graph Convolutional Networks for Agent-Initiated Social E-Commerce Recommendation".

This repo contains a tensorflow implementation of RecoGCN and the experiment dataset Running the RecoGCN model python train.py Example training outp

xfl15 30 Nov 25, 2022
Attentive Social Recommendation: Towards User And Item Diversities

ASR This is a Tensorflow implementation of the paper: Attentive Social Recommendation: Towards User And Item Diversities Preprint, https://arxiv.org/a

Dongsheng Luo 1 Nov 14, 2021
Knowledge-aware Coupled Graph Neural Network for Social Recommendation

KCGN AAAI-2021 《Knowledge-aware Coupled Graph Neural Network for Social Recommendation》 Environments python 3.8 pytorch-1.6 DGL 0.5.3 (https://github.

xhc 22 Nov 18, 2022
reXmeX is a recommender system evaluation metric library.

A general purpose recommender metrics library for fair evaluation.

AstraZeneca 258 Dec 22, 2022
This is our Tensorflow implementation for "Graph-based Embedding Smoothing for Sequential Recommendation" (GES) (TKDE, 2021).

Graph-based Embedding Smoothing (GES) This is our Tensorflow implementation for the paper: Tianyu Zhu, Leilei Sun, and Guoqing Chen. "Graph-based Embe

Tianyu Zhu 15 Nov 29, 2022
A framework for large scale recommendation algorithms.

A framework for large scale recommendation algorithms.

Alibaba Group - PAI 880 Jan 03, 2023
Persine is an automated tool to study and reverse-engineer algorithmic recommendation systems.

Persine, the Persona Engine Persine is an automated tool to study and reverse-engineer algorithmic recommendation systems. It has a simple interface a

Jonathan Soma 87 Nov 29, 2022
An Efficient and Effective Framework for Session-based Social Recommendation

SEFrame This repository contains the code for the paper "An Efficient and Effective Framework for Session-based Social Recommendation". Requirements P

Tianwen CHEN 23 Oct 26, 2022
Beyond Clicks: Modeling Multi-Relational Item Graph for Session-Based Target Behavior Prediction

MGNN-SPred This is our Tensorflow implementation for the paper: WenWang,Wei Zhang, Shukai Liu, Qi Liu, Bo Zhang, Leyu Lin, and Hongyuan Zha. 2020. Bey

Wen Wang 18 Jan 02, 2023
Continuous-Time Sequential Recommendation with Temporal Graph Collaborative Transformer

Introduction This is the repository of our accepted CIKM 2021 paper "Continuous-Time Sequential Recommendation with Temporal Graph Collaborative Trans

SeqRec 29 Dec 09, 2022
This is our implementation of GHCF: Graph Heterogeneous Collaborative Filtering (AAAI 2021)

GHCF This is our implementation of the paper: Chong Chen, Weizhi Ma, Min Zhang, Zhaowei Wang, Xiuqiang He, Chenyang Wang, Yiqun Liu and Shaoping Ma. 2

Chong Chen 53 Dec 05, 2022
A library of Recommender Systems

A library of Recommender Systems This repository provides a summary of our research on Recommender Systems. It includes our code base on different rec

MilaGraph 980 Jan 05, 2023
Spark-movie-lens - An on-line movie recommender using Spark, Python Flask, and the MovieLens dataset

A scalable on-line movie recommender using Spark and Flask This Apache Spark tutorial will guide you step-by-step into how to use the MovieLens datase

Jose A Dianes 794 Dec 23, 2022
Recommender System Papers

Included Conferences: SIGIR 2020, SIGKDD 2020, RecSys 2020, CIKM 2020, AAAI 2021, WSDM 2021, WWW 2021

RUCAIBox 704 Jan 06, 2023
Jointly Learning Explainable Rules for Recommendation with Knowledge Graph

Jointly Learning Explainable Rules for Recommendation with Knowledge Graph

57 Nov 03, 2022