Semantic Scholar's Author Disambiguation Algorithm & Evaluation Suite

Overview

S2AND

This repository provides access to the S2AND dataset and S2AND reference model described in the paper S2AND: A Benchmark and Evaluation System for Author Name Disambiguation by Shivashankar Subramanian, Daniel King, Doug Downey, and Sergey Feldman (https://arxiv.org/abs/2103.07534).

The reference model will be live on semanticscholar.org later this year, but the trained model is available now as part of the data download (see below).

Installation

To install this package, run the following:

git clone https://github.com/allenai/S2AND.git
cd S2AND
conda create -y --name s2and python==3.7
conda activate s2and
pip install -r requirements.in
pip install -e .

To obtain the training data, run this command after the package is installed (from inside the S2AND directory; expected download size is 50.4 GiB):

aws s3 sync --no-sign-request s3://ai2-s2-research-public/s2and-release data/

If you run into cryptic errors about GCC on macOS while installing the requirements, try this instead:

CFLAGS='-stdlib=libc++' pip install -r requirements.in

Configuration

Modify the config file at data/path_config.json. This file should look like this:

{
    "main_data_dir": "absolute path to wherever you downloaded the data to",
    "internal_data_dir": "ignore this one unless you work at AI2"
}

As the dummy file says, main_data_dir should be set to the location where you downloaded the data, and internal_data_dir can be ignored, as it is only used by scripts that rely on unreleased data internal to Semantic Scholar.
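For example, if you synced the data into /home/you/S2AND/data (a hypothetical path; substitute your own), the filled-in file would be:

{
    "main_data_dir": "/home/you/S2AND/data",
    "internal_data_dir": "ignore this one unless you work at AI2"
}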

How to use S2AND for loading data and training a model

Once you have downloaded the datasets, you can go ahead and load up one of them:

from os.path import join
from s2and.data import ANDData

dataset_name = "pubmed"
parent_dir = "data/pubmed/
dataset = ANDData(
    signatures=join(parent_dir, f"{dataset_name}_signatures.json"),
    papers=join(parent_dir, f"{dataset_name}_papers.json"),
    mode="train",
    specter_embeddings=join(parent_dir, f"{dataset_name}_specter.pickle"),
    clusters=join(parent_dir, f"{dataset_name}_clusters.json"),
    block_type="s2",
    train_pairs_size=100000,
    val_pairs_size=10000,
    test_pairs_size=10000,
    name=dataset_name,
    n_jobs=8,
)

This may take a few minutes - there is a lot of text pre-processing to do.

The first step in the S2AND pipeline is to specify a featurizer and then train a binary classifier that tries to guess whether two signatures are referring to the same person.

We'll do hyperparameter selection with the validation set and then get the test area under ROC curve.

Here's how to do all that:

from s2and.model import PairwiseModeler
from s2and.featurizer import featurize, FeaturizationInfo
from s2and.eval import pairwise_eval

featurization_info = FeaturizationInfo()
# the cache will make it faster to train multiple times - it stores the features on disk for you
train, val, test = featurize(dataset, featurization_info, n_jobs=8, use_cache=True)
X_train, y_train = train
X_val, y_val = val
X_test, y_test = test

# calibration fits isotonic regression after the binary classifier is fit
# monotone constraints help the LightGBM classifier behave sensibly
pairwise_model = PairwiseModeler(
    n_iter=25, calibrate=True, monotone_constraints=featurization_info.lightgbm_monotone_constraints
)
# this does hyperparameter selection, which is why we need to pass in the validation set.
pairwise_model.fit(X_train, y_train, X_val, y_val)

# this will also dump a lot of useful plots (ROC, PR, SHAP) to the figs_path
pairwise_metrics = pairwise_eval(X_test, y_test, pairwise_model.classifier, figs_path='figs/', title='example')
print(pairwise_metrics)

The second stage in the S2AND pipeline is to tune hyperparameters for the clusterer on the validation data and then evaluate the full clustering pipeline on the test blocks.

We use agglomerative clustering as implemented in fastcluster with average linkage. There is only one hyperparameter to tune.

from s2and.model import Clusterer, FastCluster
from s2and.eval import cluster_eval
from hyperopt import hp

clusterer = Clusterer(
    featurization_info,
    pairwise_model,
    cluster_model=FastCluster(linkage="average"),
    search_space={"eps": hp.uniform("eps", 0, 1)},
    n_iter=25,
    n_jobs=8,
)
clusterer.fit(dataset)

# the metrics_per_signature are there so we can break out the facets if needed
metrics, metrics_per_signature = cluster_eval(dataset, clusterer)
print(metrics)

For a fuller example, please see the transfer script: scripts/transfer_experiment.py.

How to use S2AND for predicting with a saved model

Assuming you have a clusterer already fit, you can dump the model to disk like so:

import pickle

with open("saved_model.pkl", "wb") as _pkl_file:
    pickle.dump(clusterer, _pkl_file)

You can then reload it, load a new dataset, and run prediction:

import pickle

with open("saved_model.pkl", "rb") as _pkl_file:
    clusterer = pickle.load(_pkl_file)

anddata = ANDData(
    signatures=signatures,
    papers=papers,
    specter_embeddings=paper_embeddings,
    name="your_name_here",
    mode="inference",
    block_type="s2",
)
pred_clusters, pred_distance_matrices = clusterer.predict(anddata.get_blocks(), anddata)

Our released models are in the S3 folder referenced above, and are called production_model.pickle and full_union_seed_*.pickle. They can be loaded the same way, except that the pickled object is a dictionary with a clusterer key.
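For example, loading and using the production model might look like this (a minimal sketch; the exact path under data/ depends on where you synced the S3 folder, and anddata is the inference dataset constructed above):

import pickle

# production_model.pickle comes from the S3 sync into data/ described above
with open("data/production_model.pickle", "rb") as _pkl_file:
    saved = pickle.load(_pkl_file)

# the pickled object is a dictionary, so pull the clusterer out before predicting
clusterer = saved["clusterer"]
pred_clusters, pred_distance_matrices = clusterer.predict(anddata.get_blocks(), anddata)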

Incremental prediction

There is also a predict_incremental function on the Clusterer that allows prediction for just a small set of new signatures. When instantiating ANDData, you can pass in cluster_seeds, which will be used instead of model predictions for those signatures. If you call predict_incremental, the full distance matrix will not be created; each new signature is simply assigned to the existing cluster it has the lowest average distance to, as long as that distance is below the model's eps, and otherwise it is reclustered with the other unassigned signatures.
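A minimal sketch of that flow, assuming the new signature ids are in a list called new_signature_ids and the existing assignments are in existing_assignments (both hypothetical names; see the ANDData and Clusterer docstrings for the exact formats expected):

# existing_assignments: precomputed cluster assignments for already-clustered signatures
# (hypothetical variable; see ANDData for the exact cluster_seeds format)
anddata = ANDData(
    signatures=signatures,
    papers=papers,
    specter_embeddings=paper_embeddings,
    cluster_seeds=existing_assignments,  # used instead of model predictions for these signatures
    name="your_name_here",
    mode="inference",
    block_type="s2",
)

# new_signature_ids: the small set of new signatures to place (hypothetical variable);
# each is attached to the existing cluster with the lowest average distance if that
# distance is below the model's eps, and is otherwise reclustered with the other
# unassigned signatures
pred_clusters = clusterer.predict_incremental(new_signature_ids, anddata)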

Reproducibility

The experiments in the paper were run with the Python (3.7.9) package versions in paper_experiments_env.txt. You can install these packages exactly by running pip install pip==21.0.0 and then pip install -r paper_experiments_env.txt --use-feature=fast-deps --use-deprecated=legacy-resolver. Rerunning on the s2and_paper branch should produce the same numbers as in the paper (we will update here if this stops being true).
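That is, from inside the S2AND directory:

pip install pip==21.0.0
pip install -r paper_experiments_env.txt --use-feature=fast-deps --use-deprecated=legacy-resolver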

Licensing

The code in this repo is released under the Apache 2.0 license (included in the repo). The dataset is released under ODC-BY (included in the S3 bucket with the data). We would also like to acknowledge that some of the affiliations data comes directly from the Microsoft Academic Graph (https://aka.ms/msracad).

Citation

@misc{subramanian2021s2and,
    title={S2AND: A Benchmark and Evaluation System for Author Name Disambiguation},
    author={Shivashankar Subramanian and Daniel King and Doug Downey and Sergey Feldman},
    year={2021},
    eprint={2103.07534},
    archivePrefix={arXiv},
    primaryClass={cs.DL}
}

Comments
  • Find some wrong labels in dataset?

    For example, in the Pubmed dataset's "clusters.json" file, there is a cluster "PM_352": ['18834', '18835', '18836', '18837', '18838', '18839', '18840', '18841']. But I checked "signatures.json": '18834' is in given_block "z zhang" while '18836' is in given_block "d zhang", so how could they be in the same cluster? Is there anything I misunderstand?

    opened by hapoyige 15
  • Add extra name incompatibility check

    This PR attempts to prevent new name incompatibilities from being added to a cluster. For example, if a claimed cluster contains S Govender and Sharlene Govender, s2and might break that claimed cluster up into two, attach Suendharan Govender to the S Govender piece, and then, when we remerge, we end up with a cluster containing S Govender, Sharlene Govender, and Suendharan Govender. I suspect this is the issue behind https://github.com/allenai/scholar/issues/27801#issuecomment-847397953, but did not verify that.

    opened by dakinggg 5
  • Question: Can predictions run in multi-core

    I see that the current implementation of prediction using the production model runs on a single core, which is very slow when working with larger datasets. I was wondering if there is an already-explored way of doing this using multiple cores, if not a GPU?

    opened by jinamshah 4
  • global_dataset trick not working?

    @dakinggg I've got a branch going to make S2AND work for paper deduplication. I haven't really messed with your global_dataset trick (I think), but now it has stopped working when n_jobs > 1. It works fine when run serially.

    Test fails with FAILED tests/test_featurizer.py::TestData::test_featurizer - NameError: name 'global_dataset' is not defined

    Did you run into this when making it work originally? Any ideas?

    opened by sergeyf 2
  • Question: How does one go about converting their own dataset to the one used for training

    Hello, I understand that this is not technically an issue, but I just want to understand how to convert a dataset of my own (one that has information like the research paper name and author details such as name, affiliation, email id, etc.) into a dataset that can be consumed for training from scratch.

    opened by jinamshah 2
  • No cluster.json in the medline dataset.

    I found that the medline dataset does not contain the "medline_cluster.json" file, which prevents me from reproducing the results. Please add the cluster.json file to S2AND.

    opened by skojaku 1
  • Link to evaluation dataset

    Thank you for this excellent open AND-algorithm and data!

    I followed the link from the paper to this repository, but I was not able to find the S2AND dataset. Could you add some help to the readme, please?

    opened by tomthe 1
  • Update readme.md

    I added intro language and it includes reference to the saved models. Are these uploaded already? If so, can you add a commit somewhere in the readme about it and maybe a short example about how to load them?

    opened by sergeyf 1
  • Thanks for the reply! Actually, I am using the dataset to do a clustering task so the divided block and cluster labels matter. Here comes another confusion for me, which is, I thought

    Thanks for the reply! Actually, I am using the dataset for a clustering task, so the divided block and cluster labels matter. Here comes another confusion for me: I thought "block" came from the original source data and "given_block" is S2AND's modified version, since the statistics match the #Block numbers in Table II of the S2AND paper. Any suggestions?

    Originally posted by @hapoyige in https://github.com/allenai/S2AND/issues/25#issuecomment-1046418074

    opened by hapoyige 0
  • Be more explicit about use_cache to avoid

    Zhipeng and the SPECTER+ team missed the cache specification and were debugging for a long time. These changes should hopefully make the cache easier to understand and notice.

    opened by sergeyf 0
  • Incremental bug

    Fixes an issue with the incremental clustering code where we were not splitting claimed profiles properly to align with the expected s2and output. The result was that incompatible clusters resulting from claims remained incompatible, and new mentions could not be assigned to them.

    opened by dakinggg 0
  • Future improvements

    • [ ] Unify the set of languages between cld2 and fasttext (see unify_lang branch for a start)
    • [ ] Audit the list of name pairs (noticed (maria, mary), (kathleen, katherine))
    • [ ] Generally improve language detection on titles (would require a whole model)
    • [ ] If a person has two very disjoint "personas", they will end up as two clusters. Probably not resolvable, but putting it here anyway
    • [ ] Somehow do better with low-information papers (e.g. no abstract, venue, affiliation, references)
    opened by dakinggg 0
Releases (v1.1_no_refs)