Semantic Scholar's Author Disambiguation Algorithm & Evaluation Suite


S2AND

This repository provides access to the S2AND dataset and the S2AND reference model described in the paper S2AND: A Benchmark and Evaluation System for Author Name Disambiguation by Shivashankar Subramanian, Daniel King, Doug Downey, and Sergey Feldman (https://arxiv.org/abs/2103.07534).

The reference model will be live on semanticscholar.org later this year, but the trained model is available now as part of the data download (see below).

Installation

To install this package, run the following:

git clone https://github.com/allenai/S2AND.git
cd S2AND
conda create -y --name s2and python==3.7
conda activate s2and
pip install -r requirements.in
pip install -e .

To obtain the training data, run this command after the package is installed (from inside the S2AND directory). The expected download size is about 50.4 GiB:

aws s3 sync --no-sign-request s3://ai2-s2-research-public/s2and-release data/

If you run into cryptic errors about GCC on macOS while installing the requirements, try this instead:

CFLAGS='-stdlib=libc++' pip install -r requirements.in

Configuration

Modify the config file at data/path_config.json. This file should look like this:

{
    "main_data_dir": "absolute path to wherever you downloaded the data to",
    "internal_data_dir": "ignore this one unless you work at AI2"
}

As the dummy file says, main_data_dir should be set to the location where you downloaded the data, and internal_data_dir can be ignored; it is used for some scripts that rely on unreleased data internal to Semantic Scholar.
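For example, if you synced the data into /path/to/S2AND/data (a placeholder path; substitute your own absolute path), the file would look like the following. Since the internal_data_dir value is ignored for external users, any string is fine there.

{
    "main_data_dir": "/path/to/S2AND/data",
    "internal_data_dir": "/path/to/S2AND/data"
}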

How to use S2AND for loading data and training a model

Once you have downloaded the datasets, you can go ahead and load up one of them:

from os.path import join
from s2and.data import ANDData

dataset_name = "pubmed"
parent_dir = "data/pubmed"
dataset = ANDData(
    signatures=join(parent_dir, f"{dataset_name}_signatures.json"),
    papers=join(parent_dir, f"{dataset_name}_papers.json"),
    mode="train",
    specter_embeddings=join(parent_dir, f"{dataset_name}_specter.pickle"),
    clusters=join(parent_dir, f"{dataset_name}_clusters.json"),
    block_type="s2",
    train_pairs_size=100000,
    val_pairs_size=10000,
    test_pairs_size=10000,
    name=dataset_name,
    n_jobs=8,
)

This may take a few minutes - there is a lot of text pre-processing to do.

The first step in the S2AND pipeline is to specify a featurizer and then train a binary classifier that tries to guess whether two signatures are referring to the same person.

We'll do hyperparameter selection with the validation set and then get the test area under the ROC curve.

Here's how to do all that:

from s2and.model import PairwiseModeler
from s2and.featurizer import featurize, FeaturizationInfo
from s2and.eval import pairwise_eval

featurization_info = FeaturizationInfo()
# the cache will make it faster to train multiple times - it stores the features on disk for you
train, val, test = featurize(dataset, featurization_info, n_jobs=8, use_cache=True)
X_train, y_train = train
X_val, y_val = val
X_test, y_test = test

# calibration fits isotonic regression after the binary classifier is fit
# monotone constraints help the LightGBM classifier behave sensibly
pairwise_model = PairwiseModeler(
    n_iter=25, calibrate=True, monotone_constraints=featurization_info.lightgbm_monotone_constraints
)
# this does hyperparameter selection, which is why we need to pass in the validation set.
pairwise_model.fit(X_train, y_train, X_val, y_val)

# this will also dump a lot of useful plots (ROC, PR, SHAP) to the figs_path
pairwise_metrics = pairwise_eval(X_test, y_test, pairwise_model.classifier, figs_path='figs/', title='example')
print(pairwise_metrics)

The second stage in the S2AND pipeline is to tune hyperparameters for the clusterer on the validation data and then evaluate the full clustering pipeline on the test blocks.

We use agglomerative clustering as implemented in fastcluster with average linkage. There is only one hyperparameter to tune.

from s2and.model import Clusterer, FastCluster
from s2and.eval import cluster_eval
from hyperopt import hp

clusterer = Clusterer(
    featurization_info,
    pairwise_model,
    cluster_model=FastCluster(linkage="average"),
    search_space={"eps": hp.uniform("eps", 0, 1)},
    n_iter=25,
    n_jobs=8,
)
clusterer.fit(dataset)

# the metrics_per_signature are there so we can break out the facets if needed
metrics, metrics_per_signature = cluster_eval(dataset, clusterer)
print(metrics)

For a fuller example, please see the transfer script: scripts/transfer_experiment.py.

How to use S2AND for predicting with a saved model

Assuming you have a clusterer already fit, you can dump the model to disk like so:

import pickle

with open("saved_model.pkl", "wb") as _pkl_file:
    pickle.dump(clusterer, _pkl_file)

You can then reload it, load a new dataset, and run prediction:

import pickle

with open("saved_model.pkl", "rb") as _pkl_file:
    clusterer = pickle.load(_pkl_file)

anddata = ANDData(
    signatures=signatures,
    papers=papers,
    specter_embeddings=paper_embeddings,
    name="your_name_here",
    mode="inference",
    block_type="s2",
)
pred_clusters, pred_distance_matrices = clusterer.predict(anddata.get_blocks(), anddata)

Our released models are in the S3 folder referenced above and are called production_model.pickle and full_union_seed_*.pickle. They can be loaded the same way, except that the pickled object is a dictionary with a clusterer key.
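For example, here is a minimal sketch of loading the released production model. The file name comes from the S3 folder above; the local path is an assumption based on the download command earlier in this readme.

import pickle

# the released models are pickled dictionaries; the fitted Clusterer
# is stored under the "clusterer" key (the path below assumes you synced
# the S3 data into data/)
with open("data/production_model.pickle", "rb") as _pkl_file:
    saved = pickle.load(_pkl_file)

clusterer = saved["clusterer"]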

Incremental prediction

There is also a predict_incremental function on the Clusterer that allows prediction for just a small set of new signatures. When instantiating ANDData, you can pass in cluster_seeds, which will be used instead of model predictions for those signatures. If you call predict_incremental, the full distance matrix is not created; each new signature is simply assigned to the existing cluster it has the lowest average distance to, as long as that distance is below the model's eps. New signatures that are not within eps of any existing cluster are clustered separately with the other unassigned signatures.
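As a rough illustration only (the cluster_seeds format and the predict_incremental arguments shown here are assumptions, not the verified interface; check the ANDData and Clusterer docstrings for the exact signatures):

from s2and.data import ANDData

# hypothetical sketch of incremental prediction: cluster_seeds maps existing
# signatures to their known cluster assignments, and predict_incremental is
# assumed to take the new signature ids plus the dataset
anddata = ANDData(
    signatures=signatures,
    papers=papers,
    specter_embeddings=paper_embeddings,
    cluster_seeds=cluster_seeds,
    name="your_name_here",
    mode="inference",
    block_type="s2",
)
new_clusters = clusterer.predict_incremental(new_signature_ids, anddata)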

Reproducibility

The experiments in the paper were run with the Python (3.7.9) package versions in paper_experiments_env.txt. You can install these packages exactly by pinning pip to 21.0.0 and then installing from paper_experiments_env.txt with the legacy resolver, as shown below. Rerunning on the branch s2and_paper should produce the same numbers as in the paper (we will update here if that stops being true).
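Concretely, the two install commands are:

pip install pip==21.0.0
pip install -r paper_experiments_env.txt --use-feature=fast-deps --use-deprecated=legacy-resolver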

Licensing

The code in this repo is released under the Apache 2.0 license (license included in the repo). The dataset is released under ODC-BY (included in the S3 bucket with the data). We would also like to acknowledge that some of the affiliations data comes directly from the Microsoft Academic Graph (https://aka.ms/msracad).

Citation

@misc{subramanian2021s2and,
    title={S2AND: A Benchmark and Evaluation System for Author Name Disambiguation},
    author={Shivashankar Subramanian and Daniel King and Doug Downey and Sergey Feldman},
    year={2021},
    eprint={2103.07534},
    archivePrefix={arXiv},
    primaryClass={cs.DL}
}

Comments
  • Find some wrong labels in dataset?

    For example, in the Pubmed dataset's "clusters.json" file, there is a cluster "PM_352": ['18834', '18835', '18836', '18837', '18838', '18839', '18840', '18841']. But checking "signatures.json", '18834' is in given_block "z zhang" while '18836' is in given_block "d zhang", so how could they be in the same cluster? Is there anything I am misunderstanding?

    opened by hapoyige 15
  • Add extra name incompatibility check

    This PR attempts to prevent new name incompatibilities from being added to a cluster. For example, if a claimed cluster contains S Govender and Sharlene Govender, S2AND might break that claimed cluster up into two, then attach Suendharan Govender to the S Govender piece, and then, when we remerge, we end up with a cluster containing S Govender, Sharlene Govender, and Suendharan Govender. I suspect this is the issue behind https://github.com/allenai/scholar/issues/27801#issuecomment-847397953, but did not verify that.

    opened by dakinggg 5
  • Question: Can predictions run on multiple cores?

    I see that the current implementation of prediction using the production model runs on a single core, which is very slow when working with larger datasets. I was wondering if there is an already-explored way of doing this using multiple cores, if not a GPU?

    opened by jinamshah 4
  • global_dataset trick not working?

    @dakinggg I've got a branch going to make S2AND work for paper deduplication. I haven't really messed with your global_dataset trick (I think), but it has now stopped working when n_jobs > 1. It works fine when run serially.

    Test fails with FAILED tests/test_featurizer.py::TestData::test_featurizer - NameError: name 'global_dataset' is not defined

    Did you run into this when making it work originally? Any ideas?

    opened by sergeyf 2
  • Question: How does one go about converting their own dataset to the one used for training?

    Hello, I understand that this is not technically an issue, but I just want to understand how to convert a dataset of my own (one that has information like the research paper name and the authors' details such as name, affiliation, email id, etc.) to a dataset that can be consumed for training from scratch.

    opened by jinamshah 2
  • No cluster.json in the medline dataset.

    I found that the medline dataset does not contain a "medline_cluster.json" file, which prevents me from reproducing the results. Please add the cluster.json file to S2AND.

    opened by skojaku 1
  • Link to evaluation dataset

    Thank you for this excellent open AND-algorithm and data!

    I followed the link from the paper to this repository, but I was not able to find the S2AND dataset. Could you add some help to the readme, please?

    opened by tomthe 1
  • Update readme.md

    I added intro language, and it includes a reference to the saved models. Are these uploaded already? If so, can you add a comment somewhere in the readme about it and maybe a short example of how to load them?

    opened by sergeyf 1
  • Thanks for the reply! Actually, I am using the dataset to do a clustering task, so the divided block and cluster labels matter. Here comes another confusion for me, which is: I thought "block" came from the original source data and "given_block" is the modified version from S2AND, since the number of statistics matches the #Block in Table Ⅱ in the S2AND paper. Any suggestions?

    Originally posted by @hapoyige in https://github.com/allenai/S2AND/issues/25#issuecomment-1046418074

    opened by hapoyige 0
  • Be more explicit about use_cache to avoid

    Zhipeng and the SPECTER+ team missed the cache specification and were debugging for a long time. These changes should hopefully make the cache easier to understand and notice.

    opened by sergeyf 0
  • Incremental bug

    Incremental bug

    Fixes an issue with the incremental clustering code where we were not splitting claimed profiles properly to align with the expected S2AND output. The result was that incompatible clusters resulting from claims remained incompatible, and new mentions could not be assigned to them.

    opened by dakinggg 0
  • Future improvements

    • [ ] Unify the set of languages between cld2 and fasttext (see the unify_lang branch for a start)
    • [ ] Audit the list of name pairs (noticed (maria, mary), (kathleen, katherine))
    • [ ] Generally improve language detection on titles (would require a whole model)
    • [ ] If a person has two very disjoint "personas", they will end up as two clusters. Probably not resolvable, but noting it here anyway.
    • [ ] Somehow do better with low-information papers (e.g. no abstract, venue, affiliation, or references)
    opened by dakinggg 0
Releases: v1.1_no_refs