🐍💯 pySBD (Python Sentence Boundary Disambiguation) is a rule-based sentence boundary detection module that works out-of-the-box.

Overview


pySBD: Python Sentence Boundary Disambiguation (SBD)


pySBD - Python Sentence Boundary Disambiguation (SBD) - is a rule-based sentence boundary detection module that works out-of-the-box.

This project is a direct port of the Ruby gem Pragmatic Segmenter, which provides rule-based sentence boundary detection.


Highlights

The short research paper 'PySBD: Pragmatic Sentence Boundary Disambiguation' was accepted at the 2nd Workshop for Natural Language Processing Open Source Software (NLP-OSS) at EMNLP 2020.

Research Paper:

https://arxiv.org/abs/2010.09657

Recorded Talk: (video linked from the repository)

Poster: (image linked from the repository)

Install


pip install pysbd

Usage

  • Currently pySBD supports 22 languages.

import pysbd
text = "My name is Jonas E. Smith. Please turn to p. 55."
seg = pysbd.Segmenter(language="en", clean=False)
print(seg.segment(text))
# ['My name is Jonas E. Smith.', 'Please turn to p. 55.']
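
If you also need character offsets into the original text, the optional char_span parameter makes segment() return TextSpan objects (with sent, start and end attributes, as seen in the issues below) rather than plain strings. A minimal sketch:

import pysbd
text = "My name is Jonas E. Smith. Please turn to p. 55."
seg = pysbd.Segmenter(language="en", clean=False, char_span=True)
for span in seg.segment(text):
    # each TextSpan carries the sentence text and its (start, end) char offsets
    print(span.sent, span.start, span.end)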
import spacy
from pysbd.utils import PySBDFactory

nlp = spacy.blank('en')

# explicitly adding component to pipeline
# (recommended - makes it more readable to tell what's going on)
nlp.add_pipe(PySBDFactory(nlp))

# or you can use it implicitly with keyword
# pysbd = nlp.create_pipe('pysbd')
# nlp.add_pipe(pysbd)

doc = nlp('My name is Jonas E. Smith. Please turn to p. 55.')
print(list(doc.sents))
# [My name is Jonas E. Smith., Please turn to p. 55.]
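
Note: the snippet above targets spaCy v2.x. With spaCy v3, nlp.add_pipe takes the string name of a registered component (see the pull requests in the Comments section below). A minimal sketch of v3-style registration; the component name pysbd_sentencizer is illustrative, not pySBD's own API:

import pysbd
import spacy
from spacy.language import Language

@Language.component("pysbd_sentencizer")  # illustrative name
def pysbd_sentencizer(doc):
    seg = pysbd.Segmenter(language="en", clean=False, char_span=True)
    # mark a token as a sentence start iff a pySBD sentence begins at its char offset
    start_chars = {span.start for span in seg.segment(doc.text)}
    for token in doc:
        token.is_sent_start = token.idx in start_chars
    return doc

nlp = spacy.blank('en')
nlp.add_pipe("pysbd_sentencizer")
doc = nlp('My name is Jonas E. Smith. Please turn to p. 55.')
print(list(doc.sents))
# [My name is Jonas E. Smith., Please turn to p. 55.]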

Contributing

If you want to contribute new feature/language support, or you have found text that pySBD segments incorrectly, please head to CONTRIBUTING.md to learn more and follow these steps:

  1. Fork it ( https://github.com/nipunsadvilkar/pySBD/fork )
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

Citation

If you use the pysbd package in your projects or research, please cite 'PySBD: Pragmatic Sentence Boundary Disambiguation':

@inproceedings{sadvilkar-neumann-2020-pysbd,
    title = "{P}y{SBD}: Pragmatic Sentence Boundary Disambiguation",
    author = "Sadvilkar, Nipun  and
      Neumann, Mark",
    booktitle = "Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.nlposs-1.15",
    pages = "110--114",
    abstract = "We present a rule-based sentence boundary disambiguation Python package that works out-of-the-box for 22 languages. We aim to provide a realistic segmenter which can provide logical sentences even when the format and domain of the input text is unknown. In our work, we adapt the Golden Rules Set (a language specific set of sentence boundary exemplars) originally implemented as a ruby gem pragmatic segmenter which we ported to Python with additional improvements and functionality. PySBD passes 97.92{\%} of the Golden Rule Set examplars for English, an improvement of 25{\%} over the next best open source Python tool.",
}

Credit

This project wouldn't be possible without the great work done by the Pragmatic Segmenter team.

Comments
  • Question marks at the end swallowed

    Looks like the example with just question marks is good now:

    >>> segmenter.segment("??")
    ['??']
    

    but the example with double question marks as a token at the end of a sentence still loses the question marks:

    >>> segmenter.segment("T stands for the vector transposition. As shown in Fig. ??")
    ['T stands for the vector transposition.', 'As shown in Fig.']
    

    looks like this is the minimal repro:

    >>> segmenter.segment("Fig. ??")
    ['Fig.']
    
    bug edge-cases 
    opened by dakinggg 11
  • Pysbd just hangs 🐛

    Describe the bug: The process hangs.

    To Reproduce
    Steps to reproduce the behavior with input text "f.302205302116302416302500302513915bd":

    import pysbd

    flat = "f.302205302116302416302500302513915bd"
    print(flat)
    seg = pysbd.Segmenter(language="en", clean=True, char_span=False)
    for z in seg.segment(flat):
        print(z)

    Expected behavior: the whole string is returned as a single segment:

    ['f.302205302116302416302500302513915bd']

    help wanted 
    opened by kariato 8
  • Incorrect text span start and end returned

    Looks like something weird is happening in this case; note that the indices of the second text span are incorrect:

    >>> seg = pysbd.Segmenter(language='en', clean=False, char_span=True)
    >>> seg.segment("1) The first item. 2) The second item.")                                                                                
    [TextSpan(sent='1) The first item.', start=0, end=18), TextSpan(sent='2) The second item.', start=0, end=19)] 
    
    bug 
    opened by dakinggg 7
  • Performance improvement?

    I am not certain of this, but I suspect there might be room for performance improvement by using re.compile to precompile all of the needed regexes; otherwise they have to be recompiled regularly (once the re cache of 100 has been exceeded). See the sketch below.
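
    A minimal illustration of the suggestion (the pattern and helper are hypothetical, not pySBD's actual regexes):

    import re

    # Compiled once at import time and reused on every call.
    MULTI_PERIOD = re.compile(r"\.{3,}")

    def collapse_ellipses(text):
        # Equivalent to re.sub(r"\.{3,}", "...", text), but skips the
        # per-call lookup in re's bounded internal pattern cache.
        return MULTI_PERIOD.sub("...", text)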

    question 
    opened by dakinggg 7
  • Slovak lang support

    We've added support for SBD in Slovak language text.

    Language specific improvements:

    • list of common Slovak abbreviations
    • list of prepositive abbreviations
    • list of number abbreviations
    • handling of roman numerals
    • handling of „text“ quotes, which are common in Slovak
    • handling of ordinal numerals in dates, such as 17. Apríl 2020
    • modified the replacement of periods in abbreviations so it can consistently handle common Slovak abbreviations such as Company Name s. r. o.
    • disabled processing of alphabetical lists, because of conflicts with some common abbreviations

    The code has been tested for stability on a very large corpus of web text. There has been no rigorous testing of segmentation quality, but the subjective feeling in the team is very positive.

    language 
    opened by misotrnka 6
  • Different segmentation with spaCy and when using pySBD directly

    Firstly, thank you for this project - I was lucky to find it and it is really useful.

    I seem to have found a case where segmentation behaves differently when run within the spaCy pipeline versus when using pySBD directly. I stumbled on it with my own text, where a sentence following a quoted sentence was being lumped together with it. I looked through the Golden Rules and found this wasn't expected, and then noticed that even with the text from one of your tests it acts differently in spaCy.

    To reproduce, run these two bits of code:

    import spacy
    from pysbd.utils import PySBDFactory

    nlp = spacy.blank('en')
    nlp.add_pipe(PySBDFactory(nlp))
    doc = nlp("She turned to him, \"This is great.\" She held the book out to show him.")
    for sent in doc.sents:
        print(str(sent).strip() + '\n')
    

    She turned to him, "This is great." She held the book out to show him.

    import pysbd
    text = "She turned to him, \"This is great.\" She held the book out to show him."
    seg = pysbd.Segmenter(language="en", clean=False)
    #print(seg.segment(text))
    for sent in seg.segment(text):
        print(str(sent).strip() + '\n')
    

    She turned to him, "This is great."

    She held the book out to show him.

    The second way is the desired output (based on the rules, at least).

    bug help wanted 
    opened by nmstoker 6
  • destructive behaviour in edge-cases

    As of v0.3.3, pySBD shows destructive behavior in some edge-cases even when setting the option clean to False. When dealing with OCR text, pySBD removes whitespace after multiple periods.

    To reproduce

    import pysbd
    
    splitter = pysbd.Segmenter(language="fr", clean=False)
    
    text = "Maissen se chargea du reste .. Logiquement,"
    print(splitter.segment(text))
    
    text = "Maissen se chargea du reste ... Logiquement,"
    print(splitter.segment(text))
    
    text = "Maissen se chargea du reste .... Logiquement,"
    print(splitter.segment(text))
    

    Actual output (note the missing whitespace after the final period in the examples with .. and ....):

    ['Maissen se chargea du reste .', '.', 'Logiquement,']
    ['Maissen se chargea du reste ... ', 'Logiquement,']
    ['Maissen se chargea du reste .', '...', 'Logiquement,']
    

    Expected output

    ['Maissen se chargea du reste .', '. ', 'Logiquement,']
    ['Maissen se chargea du reste ... ', 'Logiquement,']
    ['Maissen se chargea du reste .', '... ', 'Logiquement,']
    

    In general, pySBD works well. Many thanks @nipunsadvilkar. I can also look into this as soon as I find some time and open a pull request.

    bug edge-cases 
    opened by aflueckiger 5
  • 🏎 ⚡️ 💯 [Rough] Benchmark across Segmentation Tools, Libraries and Algorithms

    Segmentation Tools, Libraries and Algorithms:

    • [x] Stanza
    • [x] syntok
    • [x] NLTK
    • [x] spaCy
    • [x] blingfire

    | Tool      | Accuracy | Speed (ms) |
    |-----------|----------|------------|
    | blingfire | 75.00%   | 49.91      |
    | pySBD     | 97.92%   | 2449.18    |
    | syntok    | 68.75%   | 783.73     |
    | spaCy     | 52.08%   | 473.96     |
    | stanza    | 72.92%   | 120803.37  |
    | NLTK      | 56.25%   | 342.98     |

    opened by nipunsadvilkar 5
  • ✨ 💫 Support Multiple languages

    Languages to be supported:

    • [x] English
    • [x] Bulgarian
    • [x] Spanish
    • [x] Russian
    • [x] Arabic
    • [x] Amharic
    • [x] Marathi
    • [x] Hindi
    • [x] Armenian
    • [x] Persian
    • [x] Urdu
    • [x] Polish
    • [x] Chinese
    • [x] Dutch
    • [x] Danish
    • [x] French
    • [x] Italian
    • [x] Greek
    • [x] Burmese
    • [x] Japanese
    • [x] German
    • [x] Kazakh
    enhancement 
    opened by nipunsadvilkar 4
  • Regexp issues

    I'm getting errors because the regexp engine interprets parentheses: "unterminated subpattern" and "unbalanced parenthesis".

    I'm analysing very large amounts of text, so I'm not sure how these were triggered.

    opened by mollerhoj 4
  • Reduce some calls to re.sub

    So calls to re.compile are not a problem; the main thing slowing it down is the large number of calls to re.sub in abbreviation_replacer.py. I reduced some of these calls, which speeds it up by a factor of ~3-3.5x on my machine for the specific (longish) document I tested with. I also included the script I used to test timing (a sketch of such a harness follows). Given that you are much more familiar with the codebase, see if my changes look reasonable; all the tests still pass. There are probably some more ways to speed up the calls in that file.
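
    A minimal timing harness in the spirit of the script mentioned above (document.txt is a placeholder path, not part of the PR):

    import time
    import pysbd

    seg = pysbd.Segmenter(language="en", clean=False)
    with open("document.txt") as f:  # placeholder: any longish document
        text = f.read()

    runs = 10
    start = time.perf_counter()
    for _ in range(runs):
        seg.segment(text)
    print(f"avg {(time.perf_counter() - start) / runs:.3f}s per pass")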

    enhancement 
    opened by dakinggg 4
  • How is accuracy on OPUS-100 computed?

    Hi! Thanks for this library.

    Since there is no notion of documents in the OPUS-100 dataset, it is not clear to me how accuracy is computed. I tried a naive approach using pairwise joining of sentences:

    from datasets import load_dataset
    import pysbd
    
    if __name__ == "__main__":
        sentences = [
            sample["de"].strip()
            for sample in load_dataset("opus100", "de-en", split="test")["translation"]
        ]
    
        correct = 0
        total = 0
    
        segmenter = pysbd.Segmenter(language="de")
    
        for sent1, sent2 in zip(sentences, sentences[1:]):
            out = tuple(
                s.strip() for s in segmenter.segment(sent1 + " " + sent2)
            )
    
            total += 1
    
            if out == (sent1, sent2):
                correct += 1
    
        print(f"{correct}/{total} = {correct / total}")
    

    But I get 1011/1999 = 50.6% accuracy, which is not close to the 80.95% reported in the paper.

    Thanks for any help!

    opened by bminixhofer 1
  • Added decorator as required by latest spaCy

    Hello!

    In using pySBD, I've noticed that the current example script no longer works with the latest version of spaCy (3.3.0). This is the traceback I get:

    Traceback (most recent call last):
      File "/Users/lucas/Code/significant-statements-extraction/scripts/test_pysbd.py", line 27, in <module>
        nlp.add_pipe(pysbd_sentence_boundaries)
      File "/Users/lucas/miniforge3/envs/pytorch_p39/lib/python3.9/site-packages/spacy/language.py", line 773, in add_pipe
        raise ValueError(err)
    ValueError: [E966] `nlp.add_pipe` now takes the string name of the registered component factory, not a callable component. Expected string, but got <function pysbd_sentence_boundaries at 0x11ffa9160> (name: 'None').
    
    - If you created your component with `nlp.create_pipe('name')`: remove nlp.create_pipe and call `nlp.add_pipe('name')` instead.
    
    - If you passed in a component like `TextCategorizer()`: call `nlp.add_pipe` with the string name instead, e.g. `nlp.add_pipe('textcat')`.
    
    - If you're using a custom component: Add the decorator `@Language.component` (for function components) or `@Language.factory` (for class components / factories) to your custom component and assign it a name, e.g. `@Language.component('your_name')`. You can then run `nlp.add_pipe('your_name')` to add it to the pipeline.
    

    This pull request adds a @Language.component decorator to make pySBD available in spaCy again.

    opened by soldni 0
  • Arabic sentence split on the Arabic comma

    Describe the bug: Arabic sentences are split on the Arabic comma.

    To Reproduce Steps to reproduce the behavior:

    import pysbd
    text = "هذه تجربة، للغة العربية"
    seg = pysbd.Segmenter(language="ar", clean=True)
    print(seg.segment(text))
    

    Output: ['هذه تجربة،', 'للغة العربية']

    Expected behavior: the text should not be split on the Arabic comma. Expected output: ['هذه تجربة، للغة العربية']

    Additional context I locally fixed it by modifying the file: pysbd/lang/arabic.py, deleting ، from SENTENCE_BOUNDARY_REGEX.

    opened by ymoslem 0
  • Does pysbd delete sentences after detection?

    Hey there! I've been using pySBD to detect boundaries in Hindi and Marathi text, and then to save the same data rearranged from one paragraph per sample to one sentence per sample. Unfortunately, the storage size has gone down from 22 GB to 14.5 GB after just detecting boundaries and saving them per sentence. And yes, I did turn off the clean arg.

    opened by StephennFernandes 0
  • Update pysbd_as_spacy_component.py

    Thanks for a great sentence-splitting package. A small contribution after troubleshooting why the code was not working out of the box: spaCy v3 requires a string in the add_pipe() call, and the component needs to be declared using the @Language.component decorator. See also https://spacy.io/usage/processing-pipelines#custom-components. Hope it helps other users.

    opened by guebeln0 0
Releases(v0.3.4)
  • v0.3.4(Feb 11, 2021)

  • v0.3.3(Oct 8, 2020)

  • v0.3.2(Sep 11, 2020)

  • v0.3.1(Aug 11, 2020)

  • v0.3.0(Aug 11, 2020)


    • ✨ 💫 Support Multiple languages - #2
    • 🏎⚡️💯 Benchmark across Segmentation Tools, Libraries and Algorithms
    • 🎨 ♻️ Update sentence char_span logic
    • ⚡️ Performance improvements - #41
    • ♻️🐛 Refactor AbbreviationReplacer
  • v0.3.0rc(Jun 9, 2020)

    • ✨ 💫 Sentence char_span support via spaCy & regex approaches - #63
    • ♻️ Refactoring to support multiple languages
    • ✨ 💫Initial language support for - Hindi, Marathi, Chinese, Spanish
    • ✅ Updated tests - more coverage & regression tests for issues
    • 👷👷🏻‍♀️ GitHub Actions for CI/CD
    • 💚☂️ Add code coverage via coverage.py and Codecov
    • 🐛 Fix incorrect text span & vanilla pysbd vs spacy output discrepancy - #49, #53, #55 , #59
    • 🐛 Fix NUMBERED_REFERENCE_REGEX for zero or one time - #58
    • 🔐Fix security vulnerability bleach - #62
  • v0.2.3(Nov 13, 2019)

  • v0.2.2(Nov 1, 2019)

  • v0.2.1(Oct 30, 2019)

  • v0.2.0(Oct 25, 2019)

    • ✨ Add an optional char_span parameter to get each sentence along with its (start, end) char offsets in the original text
    • ✨pySBD as a spaCy component example
    • 🐛 Fix double question mark swallow bug - #39
  • v0.1.5(Oct 24, 2019)

  • v0.1.4(Oct 20, 2019)

    • ✨ ✅ Handle intermittent punctuation: added special case r"[。..!!?].*" to handle intermittent dots, exclamation marks, etc. The special-cases group can be updated as per developer needs - #34
  • v0.1.3(Oct 19, 2019)

    • 🐛 Fix lists_item_replacer - #29
    • 🐛 Fix & ♻️ refactor replace_multi_period_abbreviations - #30
    • 🐛 Fix abbreviation_replacer - #31
    • ✅ Add regression tests for issues
  • v0.1.2(Oct 18, 2019)

  • v0.1.1(Oct 9, 2019)

Owner
Nipun Sadvilkar
I like to explore the jungle of data with Python as my Swiss Army knife, with pandas, numpy, matplotlib and scikit-learn as its multi-tools 😅