METS/ALTO OCR enhancing tool by the National Library of Luxembourg (BnL)

Overview

Nautilus-OCR

The National Library of Luxembourg (BnL) started its first newspaper digitisation initiative, with layout recognition and OCR at article level, back in 2006. Service providers were asked to create high-quality images, to run an optical layout recognition process, to identify articles and to run OCR on them. The data was modeled according to the METS/ALTO standard. Since then, however, the potential of OCR software has increased considerably.

Developed by BnL in the context of its Open Data initiative, Nautilus-OCR uses these improvements in technology and the already structured data to rerun and enhance OCR. Nautilus-OCR can be used in two ways:

  1. Main purpose: Enhance the OCR quality of original (ori) METS/ALTO packages.

    Nautilus-OCR METS/ALTO to METS/ALTO pipeline:
    - Extracts all ori images/text pairs
    - Targets a specific set of block types
    - Uses enhancement prediction on every target to possibly run OCR
    - Integrates new outputs into an updated METS/ALTO package

  2. Alternatively: Use as a regular OCR engine that is applied on a set of images.

    Nautilus-OCR provides the possibility to visually compare ori (left) to new (right) outputs.
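The decision step of the METS/ALTO pipeline above can be sketched as follows. Note that this is a toy illustration: the predictor and OCR engine are stubbed out here and do not reflect the actual Nautilus-OCR API.

```python
def enhance_package(blocks, predict_enhancement, run_ocr, required=0.0):
    """For every target block, run OCR only when the predicted
    enhancement meets the required threshold; keep ori text otherwise."""
    updated = {}
    for block_id, (image, ori_text) in blocks.items():
        if predict_enhancement(image, ori_text) >= required:
            updated[block_id] = run_ocr(image)  # new OCR output
        else:
            updated[block_id] = ori_text        # original text kept
    return updated

# Toy stubs standing in for the real models:
blocks = {"TB1": ("img1", "Tbe quick"), "TB2": ("img2", "clean text")}
predict = lambda img, txt: 0.15 if img == "img1" else -0.2
ocr = lambda img: "The quick"
enhance_package(blocks, predict, ocr, required=0.02)
```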

Key features:
  • Custom model training.
  • Included pre-trained OCR, font recognition and enhancement prediction models.
  • METS/ALTO to METS/ALTO using enhancement prediction.
  • Fast, multi-font OCR pipeline.

Nautilus-OCR is mainly built on open-source libraries combined with some proprietary contributions. Please note that the project is a generalized version of an implementation originally tailored to the specific needs of BnL.

Quick Start

After following the installation instructions, Nautilus-OCR can be run using the included BnL models and example METS/ALTO data.

With nautilusocr/ as the current working directory, first copy the BnL models to the final/ folder.1

cp models/bnl/* models/final/

Next, run enhance on the examples/ directory, which contains a single mets-alto-package/

python3 src/main.py enhance -d examples/ -r 0.02

to generate new ALTO files for every block with a minimum enhancement prediction of 2%. The newly generated files can then be found in output/.

1 As explained in models/final/README.md, the models within models/final/ are automatically applied when executing the enhance, train-epr, ocr and test-ocr actions. Models outside of models/final/ are supposed to be stored for testing and comparison purposes.

Requirements

Nautilus-OCR requires:

  • Linux / macOS
    The software requires dependencies that only work on Linux and macOS. Windows is not supported at the moment.
  • Python 3.8+
    The software has been developed using Python 3.8.5.
  • Dependencies
    Access to the libraries listed in requirements.txt.
  • METS/ALTO
    METS/ALTO packages as data, or alternatively TextBlock images representing single-column snippets of text.

Installation

With Python3 (tested on version 3.8.5) installed, clone this repository and install the required dependencies:

git clone https://github.com/natliblux/nautilusocr
cd nautilusocr
pip3 install -r requirements.txt

Hunspell dependency might require:

apt-get install libhunspell-dev
brew install hunspell

OpenCV dependency might require:

apt install libgl1-mesa-glx
apt install libcudart10.1

You can test that all dependencies have been successfully installed by running

python3 src/main.py -h

and looking for the following output:

Starting Nautilus-OCR

usage: main.py [-h] {set-ocr,train-ocr,test-ocr,enhance,ocr,set-fcr,train-fcr,test-fcr,test-seg,train-epr,test-epr} ...

Nautilus-OCR Command Line Tool

positional arguments:
  {set-ocr,train-ocr,test-ocr,enhance,ocr,set-fcr,train-fcr,test-fcr,test-seg,train-epr,test-epr}
                        sub-command help

optional arguments:
  -h, --help            show this help message and exit

Workflow

The command-line tool consists of four different modules, with each one exposing a predefined set of actions:

  • ocr - optical character recognition
  • seg - text line segmentation
  • fcr - font class recognition
  • epr - enhancement prediction

To get started, one should take note of the options available in config.ini; most importantly, set the device (CPU/GPU) parameter and decide on the set of font_classes and supported_languages. Next, a general workflow could look as follows:

  1. Test the seg algorithm using test-seg to see whether any parameters need to be adjusted.
  2. Create a fcr train set using set-fcr based on font ground truth information.
  3. Train a fcr model using train-fcr.
  4. Test the fcr model accuracy using test-fcr.
  5. Create an ocr train set using set-ocr based on ocr ground truth information.
  6. Train an ocr model for every font class using train-ocr.
  7. Test the ocr model for every font class using test-ocr.
  8. Train an epr model based on ground truth and ori data using train-epr.
  9. Test the epr model accuracy using test-epr.
  10. Enhance METS/ALTO packages using enhance.
  11. Alternatively: Run ocr on a set of images using ocr.

This is done by calling main.py followed by the desired action and options:

python3 src/main.py [action] [options]

The following module sections will list all available actions and options.

Modules

optical character recognition

set-ocr

Creates an ocr train set consisting of image/text line pairs. Every pair is of type New, Generated or Existing:

  • New: Extracted from an image and its ALTO file.
  • Generated: The image part of the pair is generated artificially based on given input text.
  • Existing: The pair exists already (has been prepared beforehand) and is included in the train set.
Option | Default | Explanation
-j --jsonl | | Path to jsonl file referencing image and ALTO files 1 2
-c --confidence | 9 (max tolerant) | Highest tolerated confidence value for every character in line
-m --model | fcr-model | Name of fcr model to be used in absence of font class indication 3
-e --existing | | Path to directory containing existing pairs 4 5
-g --generated | 0 (none) | Number of artificially generated pairs to be added per font class 6 7
-t --text | | Path to text file containing text for artificial pairs 8
-n --nlines | -1 (max) | Maximum number of pairs per font class
-s --set | ocr-train-set | Name of ocr train set

1 Example lines:

{"image": "/path/image1.png", "gt": "/path/alto1.xml"}
{"image": "/path/image2.png", "gt": "/path/alto2.xml", "gt-block-id": "TB1"}
{"image": "/path/image3.png", "gt": "/path/alto3.xml", "gt-block-id": "TB2", "font": "fraktur"}

2 Key gt-block-id can optionally reference a single block in a multi-block ALTO file.
3 Absence of font key means that -m option must be set to automatically determine the font class.
4 Naming convention for existing pairs: [pair-name].png/.tif & [pair-name].gt.txt.
5 Image part of existing pairs is supposed to be unbinarized.
6 Artificially generated lines represent lower quality examples for the model to learn from.
7 Fonts in fonts/artificial/ are being randomly used and can be adjusted per font class.
8 Text is given by a .txt file with individual words delimited by spaces and line breaks.
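Before running set-ocr, the jsonl input can be sanity-checked against the format shown above. A minimal sketch follows; check_jsonl_line is a hypothetical helper, not part of Nautilus-OCR:

```python
import json

REQUIRED_KEYS = {"image", "gt"}
OPTIONAL_KEYS = {"gt-block-id", "font"}

def check_jsonl_line(line):
    """Parse one jsonl line and verify its keys against the set-ocr format."""
    entry = json.loads(line)
    missing = REQUIRED_KEYS - entry.keys()
    unknown = entry.keys() - REQUIRED_KEYS - OPTIONAL_KEYS
    if missing or unknown:
        raise ValueError(f"missing keys: {missing}, unknown keys: {unknown}")
    return entry

check_jsonl_line('{"image": "/path/image1.png", "gt": "/path/alto1.xml"}')
```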

train-ocr

Trains an ocr model for a specific font using an ocr train set.

Option | Default | Explanation
-s --set | | Name of ocr train set to be used
-f --font | | Name of font that ocr model should be trained on
-m --model | ocr-model | Name of ocr model to be created

test-ocr

Tests models in models/final/ on a test set defined by a jsonl file.
A comparison to the original ocr data can optionally be drawn.

Option | Default | Explanation
-j --jsonl | | Path to jsonl file referencing image and ground truth ALTO files 1 2
-i --image | False | Generate output image comparing ocr output with source image
-c --confidence | False | Add ocr confidence (through font greyscale level) to output image

1 Example lines:

{"id": "001", "image": "/path/image1.png", "gt": "/path/alto1.xml"}
{"id": "002", "image": "/path/image2.png", "gt": "/path/alto2.xml", "gt-block-id": "TB1"}
{"id": "003", "image": "/path/image3.png", "gt": "/path/alto3.xml", "gt-block-id": "TB2", "ori": "/path2/alto3.xml"}
{"id": "004", "image": "/path/image4.png", "gt": "/path/alto4.xml", "gt-block-id": "TB3", "ori": "/path2/alto4.xml", "ori-block-id": "TB4"}

2 Keys ori and ori-block-id can optionally reference original ocr output for comparison purposes.

enhance

Applies ocr on a set of original METS/ALTO packages, while aiming to enhance ocr accuracy.1
An optional enhancement prediction model can prevent running ocr for some target blocks.
Models in models/final/ are automatically used for this action.2

Option | Default | Explanation
-d --directory | | Path to directory containing all original METS/ALTO packages 3 4
-r --required | 0.0 | Value for minimum required enhancement prediction 5

1 Target text block types can be adjusted in config.ini.
2 The presence of an epr model is optional.
3 METS files need to end in -mets.xml.
4 Every package name should be unique and is defined as the directory name of the METS file.
5 Enhancement predictions are in range [-1,1], set to -1 to disable epr and automatically reprocess all target blocks.
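Following footnotes 3 and 4, packages can be discovered by scanning a directory for METS files. A minimal sketch, where find_packages is a hypothetical helper and not part of the tool:

```python
from pathlib import Path

def find_packages(root):
    """Map each package name (the directory name of the METS file,
    footnote 4) to its METS file (must end in -mets.xml, footnote 3)."""
    return {mets.parent.name: mets for mets in Path(root).rglob("*-mets.xml")}
```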

ocr

Applies ocr on a directory of images while using the models in models/final/.

Option | Default | Explanation
-d --directory | | Path to directory containing target ocr source images 1
-a --alto | False | Output ocr in ALTO format
-i --image | False | Generate output image comparing ocr with source image
-c --confidence | False | Add ocr confidence (through font greyscale level) to output image

1 Subdirectories are possible; images should be in .png or .tif format.
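Footnote 1's rule can be expressed as a small helper for collecting source images recursively; collect_images is hypothetical and not part of the tool:

```python
from pathlib import Path

def collect_images(directory):
    """Recursively collect .png and .tif source images from a directory."""
    return sorted(p for p in Path(directory).rglob("*")
                  if p.suffix.lower() in {".png", ".tif"})
```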

text line segmentation

test-seg

Tests the CombiSeg segmentation algorithm on a test set defined by a jsonl file. The correct functioning of the segmentation algorithm is essential for most other modules and actions.
The default parameters should generally work well; however, they can be adjusted.1

Option | Default | Explanation
-j --jsonl | | Path to jsonl file referencing image and ALTO files 2

1 Algorithm parameters can be adjusted in config.ini in case of unsatisfactory performance.
2 Example lines:

{"image": "/path/image1.png", "gt": "/path/alto1.xml"}
{"image": "/path/image2.png", "gt": "/path/alto2.xml", "gt-block-id": "TB1"}

font class recognition

set-fcr

Creates a fcr train set consisting of individual character images.

Option | Default | Explanation
-j --jsonl | | Path to jsonl file referencing image files and the respective font classes 1
-n --nchars | max | Maximum number of characters extracted from every image 2
-s --set | fcr-train-set | Name of fcr train set

1 Example line:

{"image": "/path/image.png", "font": "fraktur"}

2 Extracting fewer characters from a larger number of images generally leads to a more diverse train set.

train-fcr

Trains a fcr model using a fcr train set.

Option | Default | Explanation
-s --set | | Name of fcr train set
-m --model | fcr-model | Name of fcr model to be created

test-fcr

Tests a fcr model on a test set defined by a jsonl file.

Option | Default | Explanation
-j --jsonl | | Path to jsonl file referencing image files and the respective font classes 1
-m --model | fcr-model | Name of fcr model to be tested

1 Example line:

{"image": "/path/image.png", "font": "fraktur"}

enhancement prediction

This module requires language dictionaries. For all language xx in supported_languages in config.ini, please either add a list of words as xx.txt or the Hunspell files xx.dic and xx.aff to dicts/.
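This requirement can be checked programmatically before training. A minimal sketch, where missing_dicts is a hypothetical helper and not part of Nautilus-OCR:

```python
from pathlib import Path

def missing_dicts(supported_languages, dicts_dir="dicts"):
    """Return the languages lacking both a word list (xx.txt) and
    a pair of Hunspell files (xx.dic and xx.aff) in dicts_dir."""
    dicts = Path(dicts_dir)
    missing = []
    for lang in supported_languages:
        wordlist = (dicts / f"{lang}.txt").exists()
        hunspell = (dicts / f"{lang}.dic").exists() and (dicts / f"{lang}.aff").exists()
        if not (wordlist or hunspell):
            missing.append(lang)
    return missing
```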

train-epr

Trains an epr model (for use in enhance) that predicts the enhancement in ocr accuracy (from ori to new) and can hence be used to prevent ocr from running on all target blocks.
Please take note of the parameters in config.ini before starting training.
This action uses the models in models/final/.

Option | Default | Explanation
-j --jsonl | | Path to jsonl file referencing image, ground truth ALTO and original ALTO files 1
-m --model | epr-model | Name of epr model to be created

1 Example lines:

{"image": "/path/image1.png", "gt": "/path/alto1.xml", "ori": "/path/alto1.xml", "year": 1859}
{"image": "/path/image2.png", "gt": "/path/alto2.xml", "gt-block-id": "TB1", "ori": "/path/alto2.xml", "year": 1859}
{"image": "/path/image3.png", "gt": "/path/alto3.xml", "gt-block-id": "TB2", "ori": "/path/alto3.xml", "ori-block-id": "TB2", "year": 1859}

test-epr

Tests an epr model and returns the mean average error after applying leave-one-out cross-validation (kNN algorithm).

Option | Default | Explanation
-m --model | epr-model | Name of epr model to be tested
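The evaluation procedure behind test-epr can be sketched in isolation. The feature representation, distance metric and value of k below are illustrative assumptions, not the settings Nautilus-OCR actually uses:

```python
def knn_predict(train, features, k=3):
    """Predict a target as the mean over the k nearest training samples,
    using Euclidean distance on the feature vectors."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda s: dist(s[0], features))[:k]
    return sum(target for _, target in nearest) / len(nearest)

def loo_mae(samples, k=3):
    """Leave-one-out cross-validation: predict each sample from all
    other samples and average the absolute errors."""
    errors = []
    for i, (features, target) in enumerate(samples):
        train = samples[:i] + samples[i + 1:]
        errors.append(abs(knn_predict(train, features, k) - target))
    return sum(errors) / len(errors)
```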

Models

Nautilus-OCR includes four pre-trained models:

  • bnl-ocr-antiqua.mlmodel

OCR model built with kraken and trained on the antiqua data (70k pairs) of an extended version of bnl-ground-truth-newspapers-before-1878 that is not limited to the cut-off date of 1878.

  • bnl-ocr-fraktur.mlmodel

OCR model built with kraken and trained on the fraktur data (43k pairs) of an extended version of bnl-ground-truth-newspapers-before-1878 that is not limited to the cut-off date of 1878.

  • bnl-fcr.h5

Binary font recognition model built with TensorFlow and trained to perform classification using font classes [antiqua, fraktur]. Please note that the fcr module automatically extends the set of classes to [antiqua, fraktur, unknown], to cover the case where the neural network input preprocessing fails. The model has been trained on 50k individual character images and showed 100% accuracy on a 200 image test set.

  • bnl-epr-de-fr-lb.jsonl

Enhancement prediction model trained on more than 4.5k text blocks for the language set [de, fr, lb]. Training data has been published between 1840 and 1960. Enhancement is predicted for the application of bnl-ocr-antiqua.mlmodel and bnl-ocr-fraktur.mlmodel, and is therefore based on font class set [antiqua, fraktur]. The model makes use of the dictionaries for all three languages within dicts/. Using leave-one-out cross-validation (kNN algorithm), a mean average error of 0.024 was achieved.

Ground Truth

bnl-ground-truth-newspapers-before-1878

OCR ground truth dataset including more than 33k text line image/text pairs, split in antiqua (19k) and fraktur (14k) font classes. The set is based on Luxembourg historical newspapers in the public domain (published before 1878), written generally in German, French and Luxembourgish. Transcription was done using a double-keying technique with a minimum accuracy of 99.95%. Font class was automatically determined using bnl-fcr.h5.

Libraries

Nautilus-OCR is mostly built on open-source libraries, the most important ones being kraken, TensorFlow, Hunspell and OpenCV.

License

License: GPL v3

See COPYING to see full text.

Credits

Thanks and credits go to the Lexicolux project, whose work is the basis for the generation of dicts/lb.txt.

Contact

If you want to get in touch, please contact us here.
