A simple tool to update bib entries with their official information (e.g., from DBLP or the ACL Anthology).

Overview

Rebiber: A tool for normalizing bibtex with official info.

We often cite papers using their arXiv versions without noting that they have already been PUBLISHED at a conference. These unofficial bib entries might violate submission or camera-ready rules for some conferences. We introduce Rebiber, a simple Python tool that fixes them automatically. It is based on the official conference information from DBLP and the ACL Anthology (for NLP conferences)! You can check the list of supported conferences here. Apart from handling outdated arXiv citations, Rebiber also normalizes citations in a unified (DBLP-style) way, supporting booktitle abbreviations and field removal.

You can use this Google Colab notebook as a simple web demo.

Changelogs

  • 2021.02.08 We now support several useful features: 1) removing certain fields from the output, e.g., "-r url,pages,address"; 2) using abbreviations to shorten the booktitle values, e.g., Proceedings of the .* Annual Meeting of the Association for Computational Linguistics --> Proc. of ACL (see the example command after this list). More examples are here.
  • 2021.01.30 We built a colab notebook as a simple web demo. link
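
For example, the following command (a usage sketch based on the arguments documented below; output.bib is a placeholder path) removes the url, pages, and address fields and shortens booktitle values:

rebiber -i rebiber/example_input.bib -o output.bib -r url,pages,address -s True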

Installation

pip install rebiber -U
rebiber --update  # update the bib data and the abbr. info 

OR

git clone https://github.com/yuchenlin/rebiber.git
cd rebiber/
pip install -e .

If you would like to use the latest GitHub version with more bug fixes, please use the second installation method.

Usage (v1.1.1)

Normalize your bibtex file with the official conference information:

rebiber -i /path/to/input.bib -o /path/to/output.bib

You can find a pair of example input and output files in rebiber/example_input.bib and rebiber/example_output.bib.

Argument usage:
-i or --input_bib. The path to the input bib file that you want to update.
-o or --output_bib. The path to the output bib file that you want to save. If you don't specify -o, the output path defaults to the -i path (the input file is updated in place).
-r or --remove. A comma-separated list of field names that you want to remove, such as "-r pages,editor,volume,month,url,biburl,address,publisher,bibsource,timestamp,doi". Empty by default.
-s or --shorten. A bool argument that is "False" by default, used for replacing booktitle values with the abbreviations defined in -a. Used as -s True.
-d or --deduplicate. A bool argument that is "True" by default, used for removing duplicate bib entries that share the same key. Used as -d True.
-l or --bib_list. The path to the list of bib json files to be loaded. Check rebiber/bib_list.txt for the default file. Usually you don't need to set this argument.
-a or --abbr_tsv. The list of conference abbreviation data. Check rebiber/abbr.tsv for the default file. Usually you don't need to set this argument.
-u or --update. Update the local bib-related data with the latest GitHub version.
-v or --version. Print the current version of Rebiber.
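
For instance, to update a file in place (no -o given) while stripping a few fields (the path here is just a placeholder):

rebiber -i refs/main.bib -r biburl,bibsource,timestamp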

Example Input and Output

An example input entry with the arXiv information (from Google Scholar or somewhere):

@article{lin2020birds,
	title={Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models},
	author={Lin, Bill Yuchen and Lee, Seyeon and Khanna, Rahul and Ren, Xiang},
	journal={arXiv preprint arXiv:2005.00683},
	year={2020}
}

An example normalized output entry with the official information:

@inproceedings{lin2020birds,
    title = "{B}irds have four legs?! {N}umer{S}ense: {P}robing {N}umerical {C}ommonsense {K}nowledge of {P}re-{T}rained {L}anguage {M}odels",
    author = "Lin, Bill Yuchen  and
      Lee, Seyeon  and
      Khanna, Rahul  and
      Ren, Xiang",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-main.557",
    doi = "10.18653/v1/2020.emnlp-main.557",
    pages = "6862--6868",
}

Supported Conferences

The bib_list.txt contains a list of converted json files of the official bib data. In this repo, we now support the full ACL Anthology, i.e., all papers published at *CL conferences (ACL, EMNLP, NAACL, etc.) as well as workshops. We also support any conference proceedings that can be downloaded from DBLP, for example, ICLR 2020.

The following conferences are supported and their bib/json files are in our data folder. You can turn each item on/off in bib_list.txt. Please feel free to create a PR to add new conferences, following this!

Name Years
ACL Anthology (until 2021-01)
AAAI 2010 -- 2020
AISTATS 2013 -- 2020
ALENEX 2010 -- 2020
ASONAM 2010 -- 2019
BigDataConf 2013 -- 2019
BMVC 2010 -- 2020
CHI 2010 -- 2020
CIDR 2009 -- 2020
CIKM 2010 -- 2020
COLT 2000 -- 2020
CVPR 2000 -- 2020
ICASSP 2015 -- 2020
ICCV 2003 -- 2019
ICLR 2013 -- 2020
ICML 2000 -- 2020
IJCAI 2011 -- 2020
KDD 2010 -- 2020
MLSys 2019 -- 2020
MM 2016 -- 2020
NeurIPS 2000 -- 2020
RECSYS 2010 -- 2020
SDM 2010 -- 2020
SIGIR 2010 -- 2020
SIGMOD 2010 -- 2020
SODA 2010 -- 2020
STOC 2010 -- 2020
UAI 2010 -- 2020
WSDM 2008 -- 2020
WWW (The Web Conf) 2001 -- 2020

Thanks to Anton Tsitsulin for his great work on collecting such a complete set of bib files!

Adding a new conference

You can manually add any conferences from DBLP by downloading their bib files into our raw_data folder and running the prepared script add_conf.sh.

Take ICLR 2020 and ICLR 2019 as an example:

  • Step 1: Go to DBLP
  • Step 2: Download the bib files and put them here as raw_data/iclr2020.bib and raw_data/iclr2019.bib (the file name should be in the format {conf_name}{year}.bib)
  • Step 3: Run script
bash add_conf.sh iclr 2019 2020

Contact

Please email [email protected] or create GitHub issues here if you have any questions or suggestions.

Comments
  • Some references are filtered by `load_bib_file`

    It's a great tool, but when I try to convert my .bib file, which is generated by the application BibDesk, some references are filtered out. Here is a minimal example of my bib file:

    @inproceedings{zhang2019heterogeneous,
            author = {Zhang, Chuxu and Song, Dongjin and Huang, Chao and Swami, Ananthram and Chawla, Nitesh V},
            booktitle = {Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
            date-added = {2021-04-03 01:39:20 +0800},
            date-modified = {2021-04-03 01:44:13 +0800},
            keywords = {Recommender system, Graph Neural Network},
            pages = {793--803},
            title = {Heterogeneous graph neural network},
            year = {2019},
            Bdsk-Url-1 = {https://doi.org/10.1145/3292500.3330961}}
    

    I think this is due to load_bib_file. The last line of this reference contains {, so load_bib_file skipped this reference.

    However, bibtexparser recognizes this kind of bib file without problems.
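
    A minimal check with bibtexparser (a sketch, assuming bibtexparser 1.x and a trimmed version of the entry above) that shows this shape of entry parses fine:

    import bibtexparser

    bibdesk_entry = r"""
    @inproceedings{zhang2019heterogeneous,
            author = {Zhang, Chuxu and Song, Dongjin and Huang, Chao and Swami, Ananthram and Chawla, Nitesh V},
            booktitle = {Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
            pages = {793--803},
            title = {Heterogeneous graph neural network},
            year = {2019},
            Bdsk-Url-1 = {https://doi.org/10.1145/3292500.3330961}}
    """

    # bibtexparser does not rely on line-by-line brace counting,
    # so the closing "}}" on the last line is handled correctly.
    db = bibtexparser.loads(bibdesk_entry)
    print(db.entries[0]["title"])  # -> Heterogeneous graph neural network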

    opened by AyanoClarke 8
  • Deleted entry after using rebiber

    Hi,

    Thanks for the great tool! I faced an issue where an entry was deleted after using rebiber though. It's this one:

    @article{loon,
      title={Autonomous navigation of stratospheric balloons using reinforcement learning.},
      author={Marc G. Bellemare and Salvatore Candido and P. S. Castro and J. Gong and Marlos C. Machado and Subhodeep Moitra and Sameera S. Ponda and Ziyu Wang},
      journal={Nature},
      year={2020},
      volume={588 7836},
      pages={
              77-82
            }
    }
    

    Do you have any idea what could be wrong?

    opened by RaghuSpaceRajan 4
  • Comments in bib file are transformed into `@comments`

    I come across two issues here:

    • [ ] Somehow the tool transforms my comments (the lines starting with %) in the bib file into @comment{} and places them at the head of the file;
    • [ ] All the bib entries are sorted alphabetically by default; is there a way (an option) to keep the original order of the entries (and thus keep the comments where they are)?

    Great tool by the way :)

    opened by boredtylin 4
  • Adding arxiv URL when available

    When a bib entry is not found, the script now checks whether it is an arXiv entry. If that is the case, the new script adds the field url = {https://arxiv.org/abs/<ID>} to the BibTeX entry.

    help wanted 
    opened by nicola-decao 3
  • Handle @string

    Nice tool! It seems that currently it doesn't handle @string in BibTeX. Any plan to add this feature?

    Example:

    @string{emnlp = "Empirical Methods in Natural Language Processing (EMNLP)"}
    
    @inproceedings{li2020efficient,
     title={Efficient One-Pass End-to-End Entity Linking for Questions},
     author={Li, Belinda Z. and Min, Sewon and Iyer, Srinivasan and Mehdad, Yashar and Yih, Wen-tau},
     booktitle=emnlp,
     year={2020}
    }
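
    For reference, bibtexparser itself can expand such @string macros (a sketch, assuming bibtexparser 1.x, where string interpolation is enabled by default):

    import bibtexparser

    text = r"""
    @string{emnlp = "Empirical Methods in Natural Language Processing (EMNLP)"}

    @inproceedings{li2020efficient,
     title={Efficient One-Pass End-to-End Entity Linking for Questions},
     author={Li, Belinda Z. and Min, Sewon and Iyer, Srinivasan and Mehdad, Yashar and Yih, Wen-tau},
     booktitle=emnlp,
     year={2020}
    }
    """

    db = bibtexparser.loads(text)
    # The macro reference in booktitle should be replaced by its defined value.
    print(db.entries[0]["booktitle"])  # -> Empirical Methods in Natural Language Processing (EMNLP)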
    
    enhancement normalization 
    opened by scottyih 3
  • Incomplete bib entry for conference

    Hello, I find that some papers accepted at certain conferences (e.g., AAAI 2020) cannot be indexed. The reason might be that only the first 1,000 entries can be downloaded from DBLP when a conference has more than 1,000 accepted papers. Is there any way to address this problem? Thanks very much!

    opened by xiaosen-wang 2
  • Whether to consider providing a Python API?

    Although a scripting approach is provided, would you consider providing a Python API?

    For example:

    import rebiber

    bib_str = """@article{lin2020birds,
        title={Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models},
        author={Lin, Bill Yuchen and Lee, Seyeon and Khanna, Rahul and Ren, Xiang},
        journal={arXiv preprint arXiv:2005.00683},
        year={2020}
    }"""

    res = rebiber.trans(bib_str)  # rebiber.trans is the API proposed in this issue, not an existing function
    print(res)
    
    opened by SinclairCoder 2
  • Modified the demo Colab for copy and paste bib

    This notebook implements a few modifications to the original Colab demo: (1) it takes pasted BibTeX as a string input (e.g., BibTeX copied from Google Scholar); (2) it prints the processed BibTeX on screen, so it can be copied and pasted into reference management software; (3) it modifies the upload cell; (4) it adds a download cell to download the processed outputs.

    opened by Herais 1
  • Update data from recent ICML/ICLR/NeurIPS/AAAI

    I've added data from NeurIPS 2020/2021, ICML 2021, and ICLR 2021, and also missing entries from ICML 2020 and AAAI 2019/2020, because these conferences each have more than 1,000 papers but each current bib file only contains the first 1,000 entries.

    opened by shizhouxing 1
  • Fix bug with parsing arXiv entries with multiline fields

    The bug is as follows. When the input is something like:

    @article{sharma19_paral_restar_spider, author = {Sharma, Pranay and Kafle, Swatantra and Khanduri, Prashant and Bulusu, Saikiran and Rajawat, Ketan and Varshney, Pramod K.}, title = {Parallel Restarted Spider -- Communication Efficient Distributed Nonconvex Optimization With Optimal Computation Complexity}, journal = {arXiv preprint arXiv:1912.06036}, year = 2019, url = {http://arxiv.org/abs/1912.06036v2}, archivePrefix = {arXiv}, primaryClass = {math.OC}, }

    It is changed to:

    @article{sharma19_paral_restar_spider, author = {Sharma, Pranay and Kafle, Swatantra and Khanduri, Prashan}, journal = {ArXiv preprint}, title = {Parallel Restarted Spider -- Communication Efficien}, url = {https://arxiv.org/abs/1912.06036}, volume = {abs/1912.06036}, year = {2019}

    Author information is lost and the title is shortened to just its first line. This happens because, if the entry is an unmatched arXiv article, the normalizer re-parses the BibTeX entry manually, looking for the title, author, and arXiv id information. This manual parser fails when any of these fields (title/author/etc.) spans multiple lines. Moreover, the external BibTeX parsing library used elsewhere already works quite well. This commit just uses the external parser, which handles multiline fields fine. With this change, we get the correct output:

    @article{sharma19_paral_restar_spider, author = {Sharma, Pranay and Kafle, Swatantra and Khanduri, Prashant and Bulusu, Saikiran and Rajawat, Ketan and Varshney, Pramod K.}, journal = {ArXiv preprint}, title = {Parallel Restarted Spider -- Communication Efficient Distributed Nonconvex Optimization With Optimal Computation Complexity}, url = {https://arxiv.org/abs/1912.06036}, volume = {abs/1912.06036}, year = {2019}
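
    A minimal sketch of the approach described above (the helper name is hypothetical; it assumes bibtexparser 1.x), reading the fields from the fully parsed entry instead of scanning lines:

    import re

    import bibtexparser

    def parse_arxiv_entry(bib_text):
        # Parse the whole entry first; bibtexparser handles fields that span multiple lines.
        entry = bibtexparser.loads(bib_text).entries[0]
        # Look for a new-style arXiv id (e.g., 1912.06036) in the journal or url field.
        match = re.search(r"(\d{4}\.\d{4,5})", entry.get("journal", "") + " " + entry.get("url", ""))
        arxiv_id = match.group(1) if match else None
        return entry.get("title"), entry.get("author"), arxiv_id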

    opened by rka97 1
  • The way `abbr.tsv` is loaded removes entries from the file

    abbr.tsv has two entries for ICML:

    Proc. of ICML | Proceedings of the .* International Conference on Machine Learning
    Proc. of ICML | Machine Learning, Proceedings of the .* International Conference
    

    but they are not both loaded, because in load_abbr_tsv() a dictionary is used such that the second entry overwrites the first one:

    ls = line.split("|")
    if len(ls) == 2:
        abbr_dict[ls[0].strip()] = ls[1].strip()
    

    I see two solutions here: either don't load the file into a dictionary (use just a list of tuples), or allow alternation in the regex patterns (i.e., (pattern1|pattern2)), which would require using a different character than | to separate the left- and right-hand sides in abbr.tsv.
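
    A minimal sketch of the first option (keeping a list of tuples; the function name and file format follow the issue, everything else is assumed):

    def load_abbr_tsv(abbr_tsv_file):
        # Keep every (abbreviation, pattern) pair so that several patterns
        # can map to the same abbreviation (e.g., the two ICML lines above).
        abbr_pairs = []
        with open(abbr_tsv_file) as f:
            for line in f:
                ls = line.split("|")
                if len(ls) == 2:
                    abbr_pairs.append((ls[0].strip(), ls[1].strip()))
        return abbr_pairs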

    bug 
    opened by thomkeh 1
  • Question about month

    Hi Yuchen, it seems that you try to ignore the month field of a bib entry in is_contain_var() and build_json(). Can you please explain why that is necessary? You also ignore @string entries. Why not just let bibtexparser parse the entire bib file? Thank you!

    opened by christophe-gu 0
  • Add new conference files

    Add entries for the following conferences (mostly machine learning conferences):

    ICML 2022, AISTATS 2022, COLT 2021/2022, ICLR 2022, MLSys 2021/2022, NeurIPS 2021, UAI 2021

    opened by rka97 0
  • Hope to add a command for batch file execution

    I have multiple bib files for several research fields and hope to convert them all in one click. I've written a bat file that automatically runs rebiber on the bib files in the working directory:

    @echo off
    for %%i in (*.bib) do echo "%%i"
    for %%i in (*.bib) do rebiber -i %%i -o Pub%%i
    pause
    exit
    
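    A rough POSIX shell equivalent of the batch file above (a sketch; it assumes rebiber is on the PATH and is run from the directory containing the bib files):

    for f in *.bib; do rebiber -i "$f" -o "Pub$f"; done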

    But a built-in command would be easier to use. Would you like to add this?

    opened by Saltsmart 0
  • The booktitle contains too much information

    I found that the booktitle of many papers taken from DBLP contains too much information.

    For example:

    @inproceedings{seo-etal-2016-bidirectional,
     author = {Min Joon Seo and
    Aniruddha Kembhavi and
    Ali Farhadi and
    Hannaneh Hajishirzi},
     bibsource = {dblp computer science bibliography, https://dblp.org},
     biburl = {https://dblp.org/rec/conf/iclr/SeoKFH17.bib},
     booktitle = {5th International Conference on Learning Representations, {ICLR} 2017,
    Toulon, France, April 24-26, 2017, Conference Track Proceedings},
     publisher = {OpenReview.net},
     timestamp = {Thu, 25 Jul 2019 01:00:00 +0200},
     title = {Bidirectional Attention Flow for Machine Comprehension},
     url = {https://openreview.net/forum?id=HJ0UKP9ge},
     year = {2017}
    }
    

    The booktitle here contains the full name and abbreviation of ICLR, as well as the location and dates. Can you keep only the first part of this information?

    For example: booktitle = "5th International Conference on Learning Representations"
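
    As a workaround, one could post-process the output and keep only the leading part of such DBLP booktitles (a sketch, not part of Rebiber itself; the -s flag with abbr.tsv may already cover common venues):

    def shorten_booktitle(booktitle):
        # "5th International Conference on Learning Representations, {ICLR} 2017, Toulon, France, ..."
        # -> "5th International Conference on Learning Representations"
        return booktitle.split(",", 1)[0].strip()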

    opened by AliceNEET 1
  • Confusing behavior with some author names

    The ImageNet paper has its last author listed as Li Fei-Fei, which is how she publishes in general, both on the paper and in the IEEE metadata; their .bib has her as Li Fei-Fei in the author.

    The DBLP record lists her as Li Fei{-}Fei.

    And yet rebiber/data/cvpr2009.bib.json has her as Fei{-}Fei Li, and so running either through rebiber incorrectly changes it to that ordering.

    The same is true for most (but not all) of her papers in the database. No idea why this would be, since DBLP consistently has her as Li Fei{-}Fei.

    cc @pranav-ust

    opened by djsutherland 1
  • Add LREC + Automatically sync

    The LREC Sign Language workshop has this website: https://www.sign-lang.uni-hamburg.de/lrec/index.html

    It links to two bib files: one without abstracts (https://www.sign-lang.uni-hamburg.de/lrec/sign-lang_lrec.bib) and one with abstracts (https://www.sign-lang.uni-hamburg.de/lrec/sign-lang_lrec_a.bib).

    While one can add them manually to this repo, I was wondering if there is a setting somewhere where one could just put this link, so that whenever someone runs an "update" script it re-fetches the bib file and processes it?

    opened by AmitMY 0