Overview

Jury

Simple tool/toolkit for evaluating NLG (Natural Language Generation), offering various automated metrics. Jury offers a smooth and easy-to-use interface. It uses datasets for the underlying metric computation, and hence adding a custom metric is as easy as adapting datasets.Metric.

The main advantages that Jury offers are:

  • Easy to use for any NLG system.
  • Calculate many metrics at once.
  • Metric calculations are handled concurrently to save processing time.
  • It supports evaluating multiple predictions.

To see more, check the official Jury blog post.

Installation

Through pip,

pip install jury

or build from source,

git clone https://github.com/obss/jury.git
cd jury
python setup.py install

Usage

API Usage

It takes only two lines of code to evaluate generated outputs.

from jury import Jury

jury = Jury()

# Microsoft translator translation for "Yurtta sulh, cihanda sulh." (16.07.2021)
predictions = ["Peace in the dormitory, peace in the world."]
references = ["Peace at home, peace in the world."]
scores = jury.evaluate(predictions, references)

Specify the metrics you want to use at instantiation.

jury = Jury(metrics=["bleu", "meteor"])
scores = jury.evaluate(predictions, references)
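
Jury also supports multiple predictions and multiple references per item. A minimal sketch, assuming the nested-list input format used in the issue examples further below (sentences are illustrative):

from jury import Jury

jury = Jury()

# Each item may carry several candidate predictions and several references.
predictions = [
    ["the cat is on the mat", "There is cat playing on the mat"],
    ["Look! a wonderful day."],
]
references = [
    ["the cat is playing on the mat.", "The cat plays on the mat."],
    ["Today is a wonderful day", "The weather outside is wonderful."],
]
scores = jury(predictions=predictions, references=references)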

CLI Usage

You can specify the paths of the predictions file and the references file to get the resulting scores. Lines should be paired across the two files (line i of the predictions file corresponds to line i of the references file).

jury eval --predictions /path/to/predictions.txt --references /path/to/references.txt --reduce_fn max
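
For illustration, the two files might look like this (hypothetical contents; line i of predictions.txt is scored against line i of references.txt):

predictions.txt:
Peace in the dormitory, peace in the world.
the cat is on the mat

references.txt:
Peace at home, peace in the world.
The cat is playing on the mat.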

If you want to specify metrics rather than use the defaults, list them under the metrics key in a JSON config file.

{
  "predictions": "/path/to/predictions.txt",
  "references": "/path/to/references.txt",
  "reduce_fn": "max",
  "metrics": [
    "bleu",
    "meteor"
  ]
}

Then, you can call jury eval with the --config argument.

jury eval --config path/to/config.json

Custom Metrics

You can implement custom metrics by inheriting from jury.metrics.Metric; you can see the current metrics on datasets/metrics. The code snippet below gives a brief example.

from jury.metrics import Metric

class CustomMetric(Metric):
    def compute(self, predictions, references):
        # Compute and return the metric score(s) here.
        pass
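
Once defined, a custom metric instance can be passed to Jury like a built-in one. A minimal sketch, assuming CustomMetric is defined as above and that its constructor can be called without extra arguments (the exact Metric constructor signature may require more; see jury.metrics):

from jury import Jury

jury = Jury(metrics=[CustomMetric()])
scores = jury.evaluate(
    predictions=["Peace in the dormitory, peace in the world."],
    references=["Peace at home, peace in the world."],
)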

Contributing

PRs are welcomed as always :)

Installation

git clone https://github.com/obss/jury.git
cd jury
pip install -e .[develop]

Tests

To run tests,

python tests/run_tests.py

Code Style

To check code style,

python tests/run_code_style.py check

To format codebase,

python tests/run_code_style.py format

License

Licensed under the MIT License.

Comments
  • Facing datasets error

    Hello, after downloading the contents from git and instantiating the object, I get this error:

    /content/image-captioning-bottom-up-top-down
    Traceback (most recent call last):
      File "eval.py", line 11, in <module>
       from jury import Jury 
      File "/usr/local/lib/python3.7/dist-packages/jury/__init__.py", line 1, in <module>
        from jury.core import Jury
      File "/usr/local/lib/python3.7/dist-packages/jury/core.py", line 6, in <module>
        from jury.metrics import EvaluationInstance, Metric, load_metric
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/__init__.py", line 1, in <module>
        from jury.metrics._core import (
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/_core/__init__.py", line 1, in <module>
        from jury.metrics._core.auto import AutoMetric, load_metric
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/_core/auto.py", line 23, in <module>
        from jury.metrics._core.base import Metric
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/_core/base.py", line 28, in <module>
        from datasets.utils.logging import get_logger
    ModuleNotFoundError: No module named 'datasets.utils'; 'datasets' is not a package
    

    Can you please check what could be the issue?

    opened by amit0623 8
  • CLI Implementation

    CLI implementation for the package that reads from txt files.

    Draft Usage: jury evaluate --predictions predictions.txt --references references.txt

    NLGEval uses a single prediction and multiple references in a way that you specify multiple references.txt files for multiple references, and similarly in the API.

    My idea is to have a single prediction and reference file including multiple predictions or multiple references. In a single txt file, maybe we can use some sort of special separator like "<sep>" instead of a special char like [",", ";", ":", "\t"]; maybe tab separated would be OK. Wdyt? @fcakyon @cemilcengiz
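
    A rough sketch of the single-file idea above (the "<sep>" separator and the helper below are only the proposal under discussion, not an existing jury interface):

    def read_multi_reference_file(path, separator="<sep>"):
        # Each line holds one item; multiple references on a line are joined by the separator.
        with open(path, encoding="utf-8") as f:
            return [line.rstrip("\n").split(separator) for line in f if line.strip()]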

    help wanted discussion 
    opened by devrimcavusoglu 5
  • BLEU: ndarray reshape error

    Hey, when computing the BLEU score (snippet below), I'm facing a reshape error in _compute_single_pred_single_ref.

    Could you assist with this?

    from jury import Jury
    
    scorer = Jury()
    
    # [2, 5/5]
    p = [
            ['dummy text', 'dummy text', 'dummy text', 'dummy text', 'dummy text'],
            ['dummy text', 'dummy text', 'dummy text', 'dummy text', 'dummy text']
        ]
    
    # [2, 4/2]
    r = [['be looking for a certain office in the building ',
          ' ask the elevator operator for directions ',
          ' be a trained detective ',
          ' be at the scene of a crime'],
         ['leave the room ',
          ' transport the notebook']]
    
    scores = scorer(predictions=p, references=r)
    

    Output:

    Traceback (most recent call last):
      File "/home/axe/Projects/VisComSense/del.py", line 22, in <module>
        scores = scorer(predictions=p, references=r)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/core.py", line 78, in __call__
        score = self._compute_single_score(inputs)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/core.py", line 137, in _compute_single_score
        score = metric.compute(predictions=predictions, references=references, reduce_fn=reduce_fn)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/datasets/metric.py", line 404, in compute
        output = self._compute(predictions=predictions, references=references, **kwargs)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/_core/base.py", line 325, in _compute
        result = self.evaluate(predictions=predictions, references=references, reduce_fn=reduce_fn, **eval_params)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 241, in evaluate
        return eval_fn(predictions=predictions, references=references, reduce_fn=reduce_fn, **kwargs)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 195, in _compute_multi_pred_multi_ref
        score = self._compute_single_pred_multi_ref(
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 176, in _compute_single_pred_multi_ref
        return self._compute_single_pred_single_ref(
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 165, in _compute_single_pred_single_ref
        predictions = predictions.reshape(
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/collator.py", line 35, in reshape
        return Collator(_seq.reshape(args).tolist(), keep=True)
    ValueError: cannot reshape array of size 20 into shape (10,)
    
    Process finished with exit code 1
    
    bug 
    opened by Axe-- 4
  • Understanding BLEU Score ('bleu_n')

    Hey, how are the different BLEU scores calculated?

    For the given snippet, why are all bleu(n) scores identical? And how does this relate to nltk's sentence_bleu (weights)?

    from jury import Jury
    
    scorer = Jury()
    predictions = [
        ["the cat is on the mat", "There is cat playing on the mat"], 
        ["Look!    a wonderful day."]
    ]
    references = [
        ["the cat is playing on the mat.", "The cat plays on the mat."], 
        ["Today is a wonderful day", "The weather outside is wonderful."]
    ]
    scores = scorer(predictions=predictions, references=references)
    
    

    Output:

    {'empty_predictions': 0,
     'total_items': 2,
     'bleu_1': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'bleu_2': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'bleu_3': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'bleu_4': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'meteor': {'score': 0.5420511682934044},
     'rouge': {'rouge1': 0.7783882783882783,
      'rouge2': 0.5925324675324675,
      'rougeL': 0.7426739926739926,
      'rougeLsum': 0.7426739926739926}}
    
    
    bug 
    opened by Axe-- 4
  • Computing BLEU more than once

    Hey, why does computing the BLEU score more than once affect the keys of the score dict, e.g. 'bleu_1', 'bleu_1_1', 'bleu_1_1_1'?

    Overall I find the library quite user-friendly, but I'm unsure about this behavior.
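
    A minimal sketch of the reported behavior (inputs are illustrative):

    from jury import Jury

    scorer = Jury()
    predictions = ["the cat is on the mat"]
    references = ["The cat is playing on the mat."]

    # Reportedly, repeated calls change the score keys,
    # e.g. 'bleu_1' on the first call, then 'bleu_1_1', then 'bleu_1_1_1'.
    scores_first = scorer(predictions=predictions, references=references)
    scores_second = scorer(predictions=predictions, references=references)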

    opened by Axe-- 4
  • New metrics structure completed.

    The new metrics structure allows users to create and define params for metrics as desired. The current metric classes in metrics/ can be extended, or a completely new custom metric can be defined by inheriting from jury.metrics.Metric.

    patch 
    opened by devrimcavusoglu 3
  • Fixed warning message in BLEURT default initialization

    The Jury constructor accepts metrics as a string, an object of the Metric class, or a list of metric configurations inside a dict. In addition, the BLEURT metric checks for the config_name key instead of the checkpoint key. Thus, this warning message is misleading if the default model is not used.

    Screenshots (omitted here) show an example of incorrect initialization with the misleading warning message, and that checkpoint is ignored.

    opened by zafercavdar 1
  • Fix Reference Structure for Basic BLEU calculation

    The wrapped function expects a slightly different reference structure than the one we give in the Single Ref-Pred method. A small structure change fixes the issue.

    Fixes #72

    opened by Sophylax 1
  • Bug: Metric object and string cannot be used together in input.

    Currently, jury allows the metrics passed to Jury(metrics=metrics) to be either a list of jury.metrics.Metric objects or a list of str, but it does not allow using both str and Metric objects together, as

    from jury import Jury
    from jury.metrics import load_metric
    
    metrics = ["bleu", load_metric("meteor")]
    jury = Jury(metrics=metrics)
    

    raises an error, since the metrics parameter expects a NestedSingleType object which is either list<str> or list<jury.metrics.Metric>.

    opened by devrimcavusoglu 1
  • BLEURT is failing to produce results

    I was trying the same example mentioned in the README file for BLEURT, and it fails by throwing an error. Please let me know the issue.

    Error :

    ImportError                               Traceback (most recent call last)
    <ipython-input-16-ed14e2ab4c7e> in <module>
    ----> 1 bleurt = Bleurt.construct()
          2 score = bleurt.compute(predictions=predictions, references=references)
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\auxiliary.py in construct(cls, task, resulting_name, compute_kwargs, **kwargs)
         99         subclass = cls._get_subclass()
        100         resulting_name = resulting_name or cls._get_path()
    --> 101         return subclass._construct(resulting_name=resulting_name, compute_kwargs=compute_kwargs, **kwargs)
        102 
        103     @classmethod
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\base.py in _construct(cls, resulting_name, compute_kwargs, **kwargs)
        235         cls, resulting_name: Optional[str] = None, compute_kwargs: Optional[Dict[str, Any]] = None, **kwargs
        236     ):
    --> 237         return cls(resulting_name=resulting_name, compute_kwargs=compute_kwargs, **kwargs)
        238 
        239     @staticmethod
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\base.py in __init__(self, resulting_name, compute_kwargs, **kwargs)
        220     def __init__(self, resulting_name: Optional[str] = None, compute_kwargs: Optional[Dict[str, Any]] = None, **kwargs):
        221         compute_kwargs = self._validate_compute_kwargs(compute_kwargs)
    --> 222         super().__init__(task=self._task, resulting_name=resulting_name, compute_kwargs=compute_kwargs, **kwargs)
        223 
        224     def _validate_compute_kwargs(self, compute_kwargs: Dict[str, Any]) -> Dict[str, Any]:
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\base.py in __init__(self, task, resulting_name, compute_kwargs, config_name, keep_in_memory, cache_dir, num_process, process_id, seed, experiment_id, max_concurrent_cache_files, timeout, **kwargs)
        100         self.resulting_name = resulting_name if resulting_name is not None else self.name
        101         self.compute_kwargs = compute_kwargs or {}
    --> 102         self.download_and_prepare()
        103 
        104     @abstractmethod
    
    ~\anaconda3\lib\site-packages\evaluate\module.py in download_and_prepare(self, download_config, dl_manager)
        649             )
        650 
    --> 651         self._download_and_prepare(dl_manager)
        652 
        653     def _download_and_prepare(self, dl_manager):
    
    ~\anaconda3\lib\site-packages\jury\metrics\bleurt\bleurt_for_language_generation.py in _download_and_prepare(self, dl_manager)
        120         global bleurt
        121         try:
    --> 122             from bleurt import score
        123         except ModuleNotFoundError:
        124             raise ModuleNotFoundError(
    
    ImportError: cannot import name 'score' from 'bleurt' (unknown location)
    
    opened by Santhanreddy71 4
  • Prism support for use_cuda option

    Referring to this issue, https://github.com/thompsonb/prism/issues/13: since it seems no active maintenance is going on, we can add this support in a public fork.

    enhancement 
    opened by devrimcavusoglu 0
  • Add support for custom tokenizer for BLEU

    Due to the nature of the Jury API, all input strings must be whole (not tokenized); the current implementation of the BLEU score tokenizes by whitespace. However, one might want results for smaller tokens, morphemes, or even the character level rather than a word-level BLEU score. Thus, it'd be great to support this by adding support for a tokenizer in the BLEU score computation.
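
    A rough sketch of what the proposed option could look like (nothing below exists in the current API; the character-level tokenizer and the way it is passed are purely hypothetical):

    def char_tokenizer(text):
        # Hypothetical character-level tokenizer for the proposed option.
        return [ch for ch in text if not ch.isspace()]

    # Proposed, not implemented: e.g. passing the tokenizer through compute_kwargs.
    # scorer = Jury(metrics=[{"metric_name": "bleu", "compute_kwargs": {"tokenizer": char_tokenizer}}])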

    enhancement help wanted 
    opened by devrimcavusoglu 0
Releases(2.2.3)
  • 2.2.3(Dec 26, 2022)

    What's Changed

    • flake8 error on python3.7 by @devrimcavusoglu in https://github.com/obss/jury/pull/118
    • Seqeval typo fix by @devrimcavusoglu in https://github.com/obss/jury/pull/117
    • Refactored requirements (sklearn). by @devrimcavusoglu in https://github.com/obss/jury/pull/121

    Full Changelog: https://github.com/obss/jury/compare/2.2.2...2.2.3

    Source code(tar.gz)
    Source code(zip)
  • 2.2.2(Sep 30, 2022)

    What's Changed

    • Migrating to evaluate package (from datasets). by @devrimcavusoglu in https://github.com/obss/jury/pull/116

    Full Changelog: https://github.com/obss/jury/compare/2.2.1...2.2.2

    Source code(tar.gz)
    Source code(zip)
  • 2.2.1(Sep 21, 2022)

    What's Changed

    • Fixed warning message in BLEURT default initialization by @zafercavdar in https://github.com/obss/jury/pull/110
    • ZeroDivisionError on precision and recall values. by @devrimcavusoglu in https://github.com/obss/jury/pull/112
    • validators added to the requirements. by @devrimcavusoglu in https://github.com/obss/jury/pull/113
    • Intermediate patch, fixes, updates. by @devrimcavusoglu in https://github.com/obss/jury/pull/114

    New Contributors

    • @zafercavdar made their first contribution in https://github.com/obss/jury/pull/110

    Full Changelog: https://github.com/obss/jury/compare/2.2...2.2.1

    Source code(tar.gz)
    Source code(zip)
  • 2.2(Mar 29, 2022)

    What's Changed

    • Fix Reference Structure for Basic BLEU calculation by @Sophylax in https://github.com/obss/jury/pull/74
    • Added BLEURT. by @devrimcavusoglu in https://github.com/obss/jury/pull/78
    • README.md updated with doi badge and citation information. by @devrimcavusoglu in https://github.com/obss/jury/pull/81
    • Add VSCode Folder to Gitignore by @Sophylax in https://github.com/obss/jury/pull/82
    • Change one BERTScore test Device to CPU by @Sophylax in https://github.com/obss/jury/pull/84
    • Add Prism metric by @devrimcavusoglu in https://github.com/obss/jury/pull/79
    • Update issue templates by @devrimcavusoglu in https://github.com/obss/jury/pull/85
    • Dl manager rework by @devrimcavusoglu in https://github.com/obss/jury/pull/86
    • Nltk upgrade by @devrimcavusoglu in https://github.com/obss/jury/pull/88
    • CER metric implementation. by @devrimcavusoglu in https://github.com/obss/jury/pull/90
    • Prism checkpoint URL updated. by @devrimcavusoglu in https://github.com/obss/jury/pull/92
    • Test cases refactored. by @devrimcavusoglu in https://github.com/obss/jury/pull/96
    • Added BARTScore by @Sophylax in https://github.com/obss/jury/pull/89
    • License information added for prism and bleurt. by @devrimcavusoglu in https://github.com/obss/jury/pull/97
    • Remove Unused Imports by @Sophylax in https://github.com/obss/jury/pull/98
    • Added WER metric. by @devrimcavusoglu in https://github.com/obss/jury/pull/103
    • Add TER metric by @devrimcavusoglu in https://github.com/obss/jury/pull/104
    • CHRF metric added. by @devrimcavusoglu in https://github.com/obss/jury/pull/105
    • Add comet by @devrimcavusoglu in https://github.com/obss/jury/pull/107
    • Doc refactor by @devrimcavusoglu in https://github.com/obss/jury/pull/108
    • Pypi fix by @devrimcavusoglu in https://github.com/obss/jury/pull/109

    New Contributors

    • @Sophylax made their first contribution in https://github.com/obss/jury/pull/74

    Full Changelog: https://github.com/obss/jury/compare/2.1.5...2.2

    Source code(tar.gz)
    Source code(zip)
  • 2.1.5(Dec 23, 2021)

    What's Changed

    • Bug fix: Typo corrected in _remove_empty() in core.py. by @devrimcavusoglu in https://github.com/obss/jury/pull/67
    • Metric name path bug fix. by @devrimcavusoglu in https://github.com/obss/jury/pull/69

    Full Changelog: https://github.com/obss/jury/compare/2.1.4...2.1.5

    Source code(tar.gz)
    Source code(zip)
  • 2.1.4(Dec 6, 2021)

    What's Changed

    • Handle for empty predictions & references on Jury (skipping empty). by @devrimcavusoglu in https://github.com/obss/jury/pull/65

    Full Changelog: https://github.com/obss/jury/compare/2.1.3...2.1.4

    Source code(tar.gz)
    Source code(zip)
  • 2.1.3(Dec 1, 2021)

    What's Changed

    • Bug fix: Bleu reshape error fixed. by @devrimcavusoglu in https://github.com/obss/jury/pull/63

    Full Changelog: https://github.com/obss/jury/compare/2.1.2...2.1.3

    Source code(tar.gz)
    Source code(zip)
  • 2.1.2(Nov 14, 2021)

    What's Changed

    • Bug fix: bleu returning same score with different max_order is fixed. by @devrimcavusoglu in https://github.com/obss/jury/pull/59
    • nltk version upgraded as >=3.6.4 (from >=3.6.2). by @devrimcavusoglu in https://github.com/obss/jury/pull/61

    Full Changelog: https://github.com/obss/jury/compare/2.1.1...2.1.2

    Source code(tar.gz)
    Source code(zip)
  • 2.1.1(Nov 10, 2021)

    What's Changed

    • Seqeval: json normalization added. by @devrimcavusoglu in https://github.com/obss/jury/pull/55
    • Read support from folders by @devrimcavusoglu in https://github.com/obss/jury/pull/57

    Full Changelog: https://github.com/obss/jury/compare/2.1.0...2.1.1

    Source code(tar.gz)
    Source code(zip)
  • 2.1.0(Oct 25, 2021)

    What's New 🚀

    Tasks 📝

    We added a new task-based metric system which allows evaluating different types of inputs, rather than the old system which could only evaluate strings (generated text) for language generation tasks. Hence, jury is now able to support a broader set of metrics that work with different types of input.

    With this, the jury.Jury API keeps the consistency of the given set of tasks under control: Jury will raise an error if any pair of metrics is not consistent in terms of task (evaluation input).

    AutoMetric ✨

    • AutoMetric is introduced as the main factory class for automatically loading metrics; as a side note, load_metric is still available for backward compatibility and is preferred (it uses AutoMetric under the hood).
    • Tasks are now distinguished within metrics. For example, precision can be used for the language-generation or the sequence-classification task, where one evaluates from strings (generated text) while the other evaluates from integers (class labels).
    • In the configuration file, metrics can now be stated with HuggingFace datasets' metric initialization parameters. The keyword arguments used in the metric computation are now separated under the "compute_kwargs" key, as sketched below.
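
    A hedged sketch of such a configuration (aside from "compute_kwargs", the per-metric keys shown here are assumptions based on the description above; "max_order" is a BLEU parameter mentioned in the changelog):

    {
      "predictions": "/path/to/predictions.txt",
      "references": "/path/to/references.txt",
      "reduce_fn": "max",
      "metrics": [
        "meteor",
        {
          "metric_name": "bleu",
          "compute_kwargs": {
            "max_order": 2
          }
        }
      ]
    }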

    Full Changelog: https://github.com/obss/jury/compare/2.0.0...2.1.0

    Source code(tar.gz)
    Source code(zip)
  • 2.0.0(Oct 11, 2021)

    Jury 2.0.0 is out 🎉🥳

    New Metric System

    • The datasets package's Metric implementation is adopted (and extended) to provide high performance 💯 and a more unified interface 🤗.
    • Custom metric implementation changed accordingly (it now requires 3 abstract methods to be implemented).
    • The Jury class is now callable (it implements the __call__() method), though the evaluate() method is still available for backward compatibility; see the sketch after this list.
    • In the usage of Jury's evaluate, the predictions and references parameters are restricted to being passed as keyword arguments to prevent confusion/wrong computations (as with datasets' metrics).
    • MetricCollator is removed; the methods for metrics are attached directly to the Jury class. Now, metric addition and removal can be performed directly from a Jury instance.
    • Jury now supports reading metrics from strings, lists, and dictionaries. It is more generic with respect to the input type of the metrics given along with their parameters.
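
    A minimal sketch of the callable usage (metric names and inputs are illustrative):

    from jury import Jury

    scorer = Jury(metrics=["bleu", "meteor"])
    scores = scorer(
        predictions=["the cat is on the mat"],
        references=["The cat is playing on the mat."],
    )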

    New metrics

    • Accuracy, F1, Precision, Recall are added to Jury metrics.
    • All metrics in the datasets package are still available in jury through the use of jury.load_metric().

    Development

    • Test cases are improved with fixtures, and the test structure is enhanced.
    • Expected outputs are now required for tests as a JSON file with a proper name.
    Source code(tar.gz)
    Source code(zip)
  • 1.1.2(Sep 15, 2021)

  • 1.1.1(Aug 15, 2021)

    • The malfunctioning multiple-prediction calculation caused by multiple-reference input for BLEU and SacreBLEU is fixed.
    • CLI Implementation is completed. 🎉
    Source code(tar.gz)
    Source code(zip)
  • 1.0.1(Aug 13, 2021)

  • 1.0.0(Aug 9, 2021)

    Release Notes

    • The new metric structure is completed.
      • Custom metric support is improved; it is no longer required to extend datasets.Metric, but rather jury.metrics.Metric.
      • Metric usage is unified with compute, preprocess and postprocess functions, where the only required implementation for a custom metric is compute.
      • Both string and Metric objects can now be passed to Jury(metrics=metrics) in a mixed fashion.
      • The load_metric function was rearranged to capture end score results, and several metrics were added accordingly (e.g. load_metric("squad_f1") will load the squad metric, which returns the F1 score); a minimal sketch follows this list.
    • An example notebook has been added.
      • MT and QA tasks are illustrated.
      • Custom metric creation is included as an example.
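
    A minimal sketch of the load_metric usage described above (assuming load_metric is importable from jury.metrics as in later versions):

    from jury import Jury
    from jury.metrics import load_metric

    squad_f1 = load_metric("squad_f1")  # loads the squad metric configured to return the F1 score
    jury = Jury(metrics=[squad_f1])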

    Acknowledgments

    @fcakyon @cemilcengiz @devrimcavusoglu

    Source code(tar.gz)
    Source code(zip)
  • 0.0.3(Jul 26, 2021)

  • 0.0.2(Jul 14, 2021)

Owner
Open Business Software Solutions
Open Source for Open Business