MMDA - multimodal document analysis


This is work in progress...

Setup

conda create -n mmda python=3.8
conda activate mmda
pip install -r requirements.txt

Parsers

  • SymbolScraper - Apache 2.0

    • Quoted from their README: From the main directory, issue make. This will run the Maven build system, download dependencies, etc., compile source files and generate .jar files in ./target. Finally, a bash script bin/sscraper is generated, so that the program can be easily used in different directories.

Library walkthrough

1. Creating a Document for the first time

In this example, we use the SymbolScraperParser. Each parser implements its own .parse().

import os
from mmda.parsers.symbol_scraper_parser import SymbolScraperParser
from mmda.types.document import Document

ssparser = SymbolScraperParser(sscraper_bin_path='...')
doc: Document = ssparser.parse(infile='...pdf', outdir='...', outfname='...json')

Because we provided outdir and outfname, the document is also serialized for you:

assert os.path.exists(os.path.join(outdir, outfname))

2. Loading a serialized Document

Each parser implements its own .load().

doc: Document = ssparser.load(infile=os.path.join(outdir, outfname))

3. Iterating through a Document

The minimum requirement for a Document is its .text field, which is just a string.

But the real usefulness of this library comes when you have multiple different ways of segmenting the .text. For example:

for page in doc.pages:
    print(f'\n=== PAGE: {page.id} ===\n\n')
    for row in page.rows:
        print(row.text)

This snippet shows two nice aspects of this library:

  • Document provides iterables for different segmentations of text. Options include pages, tokens, rows, sents, blocks. Not every Parser will provide every segmentation, though. For example, SymbolScraperParser only provides pages, tokens, rows.

  • Each one of these segments (precisely, DocSpan objects) is aware of (and can access) other segment types. For example, you can call page.rows to get all Rows that intersect a particular Page. Or you can call sent.tokens to get all Tokens that intersect a particular Sentence. Or you can call sent.block to get the Block(s) that intersect a particular Sentence. These indexes are built dynamically when the Document is created and each time a new DocSpan type is loaded. In the extreme, one can do:

for page in doc.pages:
    for block in page.blocks:
        for sent in block.sents:
            for row in sent.rows:
                for token in sent.tokens:
                    pass

4. Loading a new DocSpan type

Not all Documents will have all segmentations available at creation time. You may need to load new definitions into an existing Document.

It's strongly recommended to create the full Document using a Parser.load(), but if you need to, you can build it up step by step using the DocSpan class and the Document.load() method:

from mmda.types.span import Span
from mmda.types.document import Document, DocSpan, Token, Page, Row, Sent, Block

doc = Document(text='I live in New York. I read the New York Times.')
page_jsons = [{'start': 0, 'end': 46, 'id': 0}]
sent_jsons = [{'start': 0, 'end': 19, 'id': 0}, {'start': 20, 'end': 46, 'id': 1}]

pages = [
    DocSpan.from_span(span=Span.from_json(span_json=page_json), 
                      doc=doc, 
                      span_type=Page)
    for page_json in page_jsons
]
sents = [
    DocSpan.from_span(span=Span.from_json(span_json=sent_json), 
                      doc=doc, 
                      span_type=Sent)
    for sent_json in sent_jsons
]

doc.load(sents=sents, pages=pages)

assert doc.sents
assert doc.pages

5. Changing the Document

We currently don't provide any tools for mutating the data in a Document once it's been created, aside from loading new data. Do so at your own risk.

But one note: if you're editing something (e.g., replacing a DocSpan in tokens), always call:

doc._build_span_type_to_spans()
doc._build_span_type_to_index()

to keep the indices up-to-date with your modified DocSpan.
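
For example, here is a minimal sketch of an in-place edit, continuing the example from section 4 (the replacement DocSpan below is purely illustrative):

new_token = DocSpan.from_span(
    span=Span.from_json(span_json={'start': 0, 'end': 1, 'id': 0}),
    doc=doc,
    span_type=Token,
)
doc.tokens[0] = new_token  # mutate the Document's token list in place

# rebuild the indices so cross-segment lookups stay consistent
doc._build_span_type_to_spans()
doc._build_span_type_to_index()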

Comments
  • VILA predictor service


    https://github.com/allenai/scholar/issues/29184

    REST service for VILA predictors

    Any code that I didn't comment on in this PR is s2age-maker boilerplate.

    opened by rodneykinney 12
  • cleanup Annotation class: remove `uuid`, pull `id` out of `metadata`, remove `dataclasses`, add `getter/setter` for `text` and `type`, make `Metadata()` take args


    @soldni I'm trying to migrate off dataclasses, but the tests are failing at:

    ERROR tests/test_eval/test_metrics.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_internal_ai2/test_api.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_parsers/test_grobid_header_parser.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_parsers/test_override.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_parsers/test_pdf_plumber_parser.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_predictors/test_bibentry_predictor.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_predictors/test_dictionary_word_predictor.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_predictors/test_span_group_classification_predictor.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_predictors/test_vila_predictors.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_types/test_document.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_types/test_indexers.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_types/test_json_conversion.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_types/test_metadata.py - TypeError: add_deprecated_field only works on dataclasses
    ERROR tests/test_types/test_span_group.py - TypeError: add_deprecated_field only works on dataclasses
    

    Not sure how best to handle this.

    opened by kyleclo 7
  • Add attributes to API data classes


    This PR adds metadata to API data classes.

    API data classes and mmda types differ in a few significant aspects:

    • id and type (and text for SpanGroup) are stored in metadata for mmda types; in the APIs, they are part of the top-level attributes.
    • metadata can store arbitrary content in the mmda types; in the data API, all attributes that are not explicitly declared are dropped.
      • we expect applications that require specific fields to declare custom Metadata and SpanGroup/BoxGroup classes that inherit from their parent class. For an example, see tests/test_internal_ai2/test_api.py
      • metadata entries are mapped to an attributes field to match how data is stored in the Annotation Store
    opened by soldni 4
  • Egork/merge spans


    Adding a class to the utils for merging spans, with optional parameters x and y. x and y are distances added to the boundaries of the boxes to decide whether they overlap.

    Here is an example of tokens, each represented by a list of spans; the task is to merge them into a single span and box.

    Result of merging tokens with x=0.04387334, y=0.01421097, the average size of a token in the document.

    Another example with the same x=0.04387334, y=0.01421097.
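
    For intuition, here is a minimal sketch of the padded-overlap test described above; the Box class and threshold handling are illustrative, not the actual utility code:

    from dataclasses import dataclass

    @dataclass
    class Box:
        x0: float
        y0: float
        x1: float
        y1: float

    def overlaps(a: Box, b: Box, x: float = 0.0, y: float = 0.0) -> bool:
        # pad each boundary by (x, y), then test rectangle intersection
        return not (
            a.x1 + x < b.x0 - x or b.x1 + x < a.x0 - x
            or a.y1 + y < b.y0 - y or b.y1 + y < a.y0 - y
        )

    # two boxes that nearly touch count as overlapping once padded by x
    assert overlaps(Box(0, 0, 1, 1), Box(1.05, 0, 2, 1), x=0.04387334)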

    opened by comorado 4
  • Bib Entry Parser/Predictor


    https://github.com/allenai/scholar/issues/32461

    Pretty standard model and interface implementation. You can see the result at http://bibentry-predictor.v0.dev.models.s2.allenai.org/ or:

    curl -X 'POST' \
      'http://bibentry-predictor.v0.dev.models.s2.allenai.org/invocations' \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{
      "instances": [
        {
          "bib_entry": "[4] Wei Zhuo, Qianyi Zhan, Yuan Liu, Zhenping Xie, and Jing Lu. Context attention heterogeneous network embed- ding. Computational Intelligence and Neuroscience , 2019. doi: 10.1155/2019/8106073."
        }
      ]
    }'
    

    Tests: integration test passed and dev deployment works

    TODO: release as version 0.0.10 after merge

    opened by stefanc-ai2 4
  • Dockerized pipeline


    Pattern for wrapping services in lighter-weight containers.

    • Replace full sub-projects in the services directory with a Dockerfile plus an xxx_api.py file for each service.
    • Add a docker-compose file that builds the services, plus a python container for running a REPL or scripts.
    • Add a pipeline.py sample end-to-end pipeline script using the services.
    • Add a run-pipeline.sh file to run the pipeline.

    @rauthur

    opened by rodneykinney 4
  • MMDA predictor evaluation


    • [x] Grobid implemented as a Parser
    • [x] Script to obtain S2-VLUE (check w/ shannons on location)
    • [x] Script to run S2-VLUE through an mmda.Predictor & obtain evaluation metrics
    • [x] Definition/implementation of end-to-end evaluation metrics.
    S2-VLUE evaluation looks like this:
    [('VILA', 'title'), ('a', 'title'), ('new', 'title')...]
    and the evaluation associated with it in the VILA paper is token-level F1.

    But if we want to compare against GROBID or other systems that use different parsers (i.e. different ._symbols and .tokens), what we want for evaluation is
    {'title': 'VILA a new...', ...}

    Our evaluation metric is then based on string-match / edit-distance metrics; see the sketch below.
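
    As a sketch, converting token-level predictions to field-level strings could look like this (assuming simple whitespace joining; the real evaluation code may differ):

    from collections import defaultdict
    from typing import Dict, List, Tuple

    def tokens_to_fields(token_labels: List[Tuple[str, str]]) -> Dict[str, str]:
        # collapse token-level predictions into field-level strings so systems
        # with different tokenizations can be compared via string metrics
        fields = defaultdict(list)
        for word, label in token_labels:
            fields[label].append(word)
        return {label: ' '.join(words) for label, words in fields.items()}

    assert tokens_to_fields(
        [('VILA', 'title'), ('a', 'title'), ('new', 'title')]
    ) == {'title': 'VILA a new'}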
    

    Focus for now on title and abstract. Other S2-VLUE classes may not even exist in GROBID or the other tools we want to compare against. If we have time, other content types we'd want are:

    • Section names
    • Author names
    • Bibliographies (split out; optionally, also-parsed)
    • Body text
    • Captions
    • Footnotes
    • Tables/Figures
    opened by kyleclo 4
  • Speed up vila pre-processing


    From earlier testing, I remember that convert_document_page_to_pdf_dict takes a significant fraction of the total prediction time for vila. Here's a simple change that does all the work in a single iteration over tokens instead of multiple list comprehensions. I tested this on some production PDFs and saw a 3x speed-up.
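
    For illustration, the single-pass pattern looks roughly like this (the dict keys and field names here are hypothetical, not the actual vila pre-processing code):

    from typing import Dict, List

    def collect_token_fields(tokens: List[dict]) -> Dict[str, list]:
        # one pass over tokens instead of several list comprehensions
        words, bboxes, sizes = [], [], []
        for tok in tokens:
            words.append(tok['text'])
            bboxes.append(tok['bbox'])
            sizes.append(tok['size'])
        return {'words': words, 'bboxes': bboxes, 'sizes': sizes}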

    @cmwilhelm @yoganandc

    https://github.com/allenai/scholar/issues/32695

    opened by rodneykinney 3
  • Kylel/2022 09/span group utils


    Minor PR: added tests for a pretty important piece of functionality that was already in use: how to combine Spans that are next to each other into a single big Span. The key undocumented behavior was that Boxes for the underlying Spans actually disappear after this merging, which the tests now capture.

    I don't think this is behavior we want to support in the future, but for now, this is how the utility is being used.
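
    For illustration, a toy version of the documented behavior, with plain dicts standing in for the real Span objects:

    from typing import List

    def merge_adjacent(spans: List[dict]) -> List[dict]:
        # contiguous spans collapse into one big span, and the merged
        # span keeps no box, mirroring what the tests capture
        merged: List[dict] = []
        for span in sorted(spans, key=lambda s: s['start']):
            if merged and span['start'] <= merged[-1]['end']:
                merged[-1]['end'] = max(merged[-1]['end'], span['end'])
                merged[-1]['box'] = None  # boxes disappear after merging
            else:
                merged.append(dict(span))
        return merged

    spans = [{'start': 0, 'end': 3, 'box': 'b0'}, {'start': 3, 'end': 7, 'box': 'b1'}]
    assert merge_adjacent(spans) == [{'start': 0, 'end': 7, 'box': None}]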

    opened by kyleclo 2
  • `Document._annotate_box_group` returns empty SpanGroups


    Very bizarre bug I've encountered when trying to annotate a document with blocks from layout parser. To reproduce, run the following code:

    from mmda.parsers.pdfplumber_parser import PDFPlumberParser
    from mmda.rasterizers.rasterizer import PDF2ImageRasterizer
    from mmda.predictors.lp_predictors import LayoutParserPredictor
    
    import torch
    import warnings
    from cached_path import cached_path        # must install via `pip install cached_path`
    
    
    pdfplumber_parser = PDFPlumberParser()
    rasterizer = PDF2ImageRasterizer()
    layout_predictor = LayoutParserPredictor.from_pretrained(
        "lp://efficientdet/PubLayNet"
    )
    
    path = str(cached_path('https://arxiv.org/pdf/2110.08536.pdf'))
    doc = pdfplumber_parser.parse(path)
    images = rasterizer.rasterize(input_pdf_path=path, dpi=72)
    doc.annotate_images(images)
    
    with torch.no_grad(), warnings.catch_warnings():
        layout_regions = layout_predictor.predict(doc)
        doc.annotate(blocks=layout_regions)
    
    # these asserts should fail, but they pass: the SpanGroups come back empty
    assert doc.blocks[0].spans == []
    assert doc.blocks[0].tokens == []
    

    I've done a bit of poking around and it seems to stem from the following snippet of code in mmda/types/document.py:

    derived_span_groups = sorted(
        derived_span_groups, key=lambda span_group: span_group.start
    )
    

    In particular, it seems like the spans attribute for each SpanGroup gets emptied after sorting.

    No clue what would be causing this, but perhaps there's an explanation?

    opened by soldni 2
  • VILA models crashing when bounding boxes are not int


    Because of changes in #69, bounding boxes are now float instead of int, which VILA does not like:

      File "/Users/lucas/miniforge3/envs/pdod/lib/python3.10/site-packages/mmda/predictors/hf_predictors/vila_predictor.py", line 161, in predict
        model_outputs = self.model(**self.model_input_collator(model_inputs))
      File "/Users/lucas/miniforge3/envs/pdod/lib/python3.10/site-packages/mmda/predictors/hf_predictors/vila_predictor.py", line 178, in model_input_collator
        return {
      File "/Users/lucas/miniforge3/envs/pdod/lib/python3.10/site-packages/mmda/predictors/hf_predictors/vila_predictor.py", line 179, in <dictcomp>
        key: torch.tensor(val, dtype=torch.int64, device=self.device)
    TypeError: 'float' object cannot be interpreted as an integer
    

    This PR adds an explicit cast operation to get around this issue during pre-processing.
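
    A toy version of the cast (the actual change lives in the vila pre-processing code):

    import torch

    bbox = [[10.0, 12.5, 200.9, 30.2]]                          # float coordinates
    bbox_int = [[int(coord) for coord in box] for box in bbox]  # explicit cast
    tensor = torch.tensor(bbox_int, dtype=torch.int64)          # no TypeError now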

    opened by soldni 2
  • Bib predictor index error bug fix


    Attempt at a fix for part 1 of https://github.com/allenai/scholar/issues/34858. I think we can work around the index error that keeps popping up this way.

    tt verify integration test passes

    next steps:

    • [ ] tt push
    • [ ] update timo-services config for bib-predictor to use new code which will trigger new deployment
    opened by geli-gel 4
  • Add fontinfo to tokens without requiring word split


    This PR appends font information (font name and size) as metadata to 'tokens' on a Document without requiring tokens to be split on that information (i.e., "best" effort if a token contains many font names or sizes). The code subclasses the WordExtractor provided by PDFPlumber.

    Currently, in a default configuration of PDFPlumberParser, tokens are already extracted with font name and size information (although it is discarded and only used for splitting). This could be added to the metadata as-is; however, the method used is the extra_attrs argument of PDFPlumber's extract_words, which forces token splitting when font name and size do not match. I believe this is a bad default, and it is not required (users can override this argument). The approach here guarantees this metadata will be captured.

    Re: "best effort" in case of many name/size options for a token: I have maintained the logic of just taking the font name and size from the first character in the token. Another approach provides the min, max and average font sizes (or others) as well as a set of font names. This could be provided in the future or by adapting the append_attrs argument. For current use cases (section nesting prediction) this approach is sufficient.

    opened by rauthur 0
  • Incomplete sentences in README


    I was going through the README and noticed a couple of sentences that start but don't end. Since I'm new to the project, I don't know how to finish the sentences myself.

    opened by dmh43 0
  • cleanup JSON conversion for all data types


    Noticed JSON serialization had inconsistent behavior across various data types, especially in cases where certain fields were empty or None.

    This PR adds a set of comprehensive tests in tests/test_types/test_json_conversion.py that document these behaviors. The PR also resolves the inconsistencies; for example, Metadata that's attached to a SpanGroup or BoxGroup no longer gets accidentally serialized as an empty dictionary.
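
    A minimal round-trip check in the spirit of those tests; the constructor arguments here are assumptions rather than the library's exact API:

    from mmda.types.span import Span
    from mmda.types.annotation import SpanGroup

    # serializing and deserializing should be lossless and stable
    sg = SpanGroup(spans=[Span(start=0, end=5)])
    assert SpanGroup.from_json(sg.to_json()).to_json() == sg.to_json()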

    opened by kyleclo 0
  • adding relations


    This PR extends the library's functionality substantially by adding a new Annotation type called Relation. A Relation is a link between two annotations (e.g., a Citation linked to its Bib Entry). The input Annotations are called the key and the value.

    A few things needed to change to support Relations:

    Annotation Names

    Relations store references to Annotation objects, but we didn't want Relation.to_json() to also call .to_json() on those objects. We only want to store minimal identifiers of the key and value: something short like bib_entry-5 or sentence-13. We call these short strings names.

    To do this, we added an optional attribute field: str to the Annotation class, which stores the field name. It's automatically populated when you run Document.annotate(new_field=list_of_annotations); each of those input annotations will have the new field name stored under .field.

    We also added a method name() that returns the name of a particular Annotation object, which is unique at the document level. AnnotationName is a minimal class that basically stores .field and .id.

    In short, now after you annotate a Document with annotations, you can do stuff like:

    doc.tokens[15].name   ==   AnnotationName(field='tokens', id=15)
    str(annotation_name)  ==   'tokens-15'
    AnnotationName.from_str('tokens-15')  ==  AnnotationName(field='tokens', id=15)
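
    Based on the description above, a minimal sketch of what such a name class could look like (the real class may differ):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AnnotationName:
        # a document-unique name: a field string plus an integer id
        field: str
        id: int

        def __str__(self) -> str:
            return f'{self.field}-{self.id}'

        @classmethod
        def from_str(cls, s: str) -> 'AnnotationName':
            field, _, id_str = s.rpartition('-')
            return cls(field=field, id=int(id_str))

    assert str(AnnotationName(field='tokens', id=15)) == 'tokens-15'
    assert AnnotationName.from_str('tokens-15') == AnnotationName(field='tokens', id=15)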
    

    Lookups based on names

    To support reconstructing a Relation object given the names of its key and value, we need the ability to look up the involved Annotations. We introduce a new method to enable this:

    annotation_name = AnnotationName.from_str('paragraphs-99')
    a = document.locate_annotation(annotation_name)  # returns the specific Annotation object
    assert a.id == 99
    assert a.field == 'paragraphs'
    

    to and from JSON

    Finally, we need some way to serialize to JSON and reconstruct from JSON. For serialization, now that we have Names, the JSON is quite minimal:

    {'key': <name_of_key>, 'value': <name_of_value>, ...other stuff that all Annotation objects have,  like Metadata...}
    

    Reconstructing a Relation from JSON is trickier because a Relation is meaningless without a Document object. The Document must also store the specific Annotations so that we can perform the lookup based on these Names.

    The API for this is similar, but you must also pass in the Document object:

    relation = Relation.from_json(my_relation_dict, my_document_containing_necessary_fields)
    
    opened by kyleclo 2