The Classical Language Toolkit

Overview

Notice: This Git branch (dev) contains the CLTK's upcoming major release (v. 1.0.0). See https://github.com/cltk/cltk/tree/master and https://docs.cltk.org/ for the legacy code and docs.


The Classical Language Toolkit (CLTK) is a Python library offering natural language processing (NLP) for the languages of pre-modern Eurasia.

Installation

For the CLTK's latest pre-release version:

$ pip install --pre cltk
Requirements:

Documentation

Documentation at https://dev.cltk.org.

Citation

@Misc{johnsonetal2014,
 author = {Johnson, Kyle P. and Patrick Burns and John Stewart and Todd Cook},
 title = {CLTK: The Classical Language Toolkit},
 url = {https://github.com/cltk/cltk},
 year = {2014--2020},
}

License

Copyright (c) 2014-2021 Kyle P. Johnson under the MIT License.

Comments
  • Add Sanskrit stopwords

    Add Sanskrit stopwords

    For @Akhilesh28. (Please assign this to yourself.)

    In Sanskrit, a stopword list would include, at least, pronouns and determiners (source), upasarga (verbal prefixes / "preverbs" / "prepositions"), and nipāta (particles) (which I read about here). Also add anything like conjunctions, particles, and interjections.

    Just putting together this list shouldn't take more than one week. Let us know if you're having problems. You can post your stopwords here, first, as a "gist": https://gist.github.com/
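    A minimal sketch of how such a module could be laid out, following the stops.py convention used elsewhere in the CLTK; the entries below are only illustrative examples drawn from this thread, not the final list:

    """Sanskrit stopwords (illustrative sketch only).

    Candidate categories: pronouns, determiners, upasarga, nipāta,
    conjunctions, particles, interjections.
    """

    # Illustrative entries only; the real list must cover every case,
    # gender, and number of each lemma.
    STOPS_LIST = [
        "सः",      # he
        "स्वयम्",   # himself
        "यद्",     # if
        "कतम",     # which
        "च",       # and (conjunction)
    ]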

    opened by kylepjohnson 55
  • Add IPA Phonetic Transcription for Greek

    Add IPA Phonetic Transcription for Greek

    This ticket is for Jack Duff, with @jtauber generously assisting.

    The basic idea is to make a map of Greek letters and their IPA equivalents, something like:

    {'α': 'a',
     'αι': 'ai',
     'ζ': 'zd',
     'θ': 'tʰ'}
    

    Obviously, it won't all be so easy, because neighboring characters change pronunciation (for example, "γ" is IPA "ɡ" but becomes "ŋ" before ["κ", "χ", "γ", "μ"]).

    If you can get this down for Attic, then consider moving on to other dialects, like Ionic or Koine.

    Within the CLTK's architecture, the transliteration maps and logic should go into something like cltk/phonetics/greek/transcription.py. Or consider making a general transcription entry point at cltk/phonetics/transcription.py and then declaring which language and dialect to use. I'll leave the implementation details to you two, though.
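    A rough sketch of how the letter map plus one context-sensitive rule might fit together; the function name, the greedy matching strategy, and the extra vowel/consonant entries are assumptions for illustration, not the final design:

    # Illustrative sketch: digraph-aware map plus the gamma-nasal rule.
    MAP = {'αι': 'ai', 'θ': 'tʰ', 'ζ': 'zd', 'α': 'a', 'γ': 'ɡ',
           'κ': 'k', 'χ': 'kʰ', 'μ': 'm', 'υ': 'y', 'ρ': 'r'}

    def transcribe(word: str) -> str:
        """Greedy left-to-right transcription, longest match first."""
        out, i = [], 0
        while i < len(word):
            # Context rule: γ is ŋ before κ, χ, γ, μ.
            if word[i] == 'γ' and i + 1 < len(word) and word[i + 1] in 'κχγμ':
                out.append('ŋ')
                i += 1
                continue
            # Try digraphs before single letters.
            if word[i:i + 2] in MAP:
                out.append(MAP[word[i:i + 2]])
                i += 2
            else:
                out.append(MAP.get(word[i], word[i]))
                i += 1
        return ''.join(out)

    print(transcribe('αγκυρα'))  # -> 'aŋkyra' (diacritics would still need handling)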

    enhancement 
    opened by kylepjohnson 51
  • Words to be added in Sanskrit's Stop Word Collection

    Words to be added in Sanskrit's Stop Word Collection

    • ~सः (He)~
    • ~स्वयम्(himself)~
    • तदीय (theirs) -आसम् (be)
    • ज्ञा (have) -परि (with) -शक्नोति (can, verb) -यद् (if) -कतम (which)

    Add all the words in all their different cases, genders, and all three numbers (singular, dual, plural). If done right, there should be exactly 72 forms for each entry (presumably 8 cases × 3 genders × 3 numbers, including a few repetitions). Be careful with verb forms; they have entirely different structures.

    File at: https://github.com/cltk/cltk/blob/master/cltk/stop/sanskrit/stops.py

    opened by nikheelpandey 42
  • Scraping srimad-bhagavadgita and valmiki ramayana.

    Scraping srimad-bhagavadgita and valmiki ramayana.

    • I am scraping Sanskrit-English data from
      • Srimad-bhagavadgita : http://www.gitasupersite.iitk.ac.in/srimad
      • Valmiki Ramayana : http://www.valmiki.iitk.ac.in/

    Ping @kylepjohnson

    new corpus 
    opened by ghost 36
  • Add corpus for classical telugu

    Add corpus for classical telugu

    https://te.wikisource.org/wiki contains the classical Telugu itihasas, puranas, vedas, stotras, etc., so I would like to scrape them and add them as a new corpus.

    Thank you.

    new corpus 
    opened by ghost 31
  • Make stopwords list for Old English

    Make stopwords list for Old English

    To generalize, I observe that there are different approaches to making stopword lists, based either on statistics (most common words, variously calculated) or grammar (definite and indefinite articles, pronouns, etc.) (or some combination).

    In doing this ticket, I would like you to do a little research on whether there exist any good lists for OE. If there is one, let's just take it. If not, we can do a little more research about what's right.
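    If no ready-made list turns up, a first statistical pass could be as simple as counting token frequencies over whatever OE text we have on hand; the corpus path and tokenization here are placeholders:

    from collections import Counter
    import re

    def stopword_candidates(text: str, n: int = 100) -> list[str]:
        """Return the n most frequent word forms as stopword candidates."""
        tokens = re.findall(r"\w+", text.lower())
        return [word for word, _ in Counter(tokens).most_common(n)]

    # Usage (path is a placeholder):
    # with open("old_english_corpus.txt", encoding="utf-8") as f:
    #     print(stopword_candidates(f.read(), n=50))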

    enhancement easy 
    opened by kylepjohnson 29
  • Scraping Raw Classical Hindi Data

    Scraping Raw Classical Hindi Data

    I am scraping Raw Classical Hindi Data from http://ltrc.iiit.ac.in/showfile.php?filename=downloads/Classical_Hindi_Literature/SHUSHA/index.html @kylepjohnson

    new corpus 
    opened by Akirato 29
  • Add declining tool based on Collatinus and Eulexis ?

    Add declining tool based on Collatinus and Eulexis ?

    Hi there, I have been thinking about this for months, and I do not think the CLTK contains anything like it. Collatinus and Eulexis are two open-source lemmatizers and decliners (their data is either open or easy to reconstruct, and they are a nice bunch of people).

    • Collatinus is in C
      • https://github.com/biblissima/collatinus is the most up to date source code for the flexer / lemmatizer
      • https://github.com/ycollatin/Collatinus-data is the repo for their data (but not up to date I guess ). It seems this is more up to date.
    • Eulexis is in php
      • https://github.com/biblissima/eulexis/blob/master/traitement.php For the whole code

    I'd be happy to convert the Collatinus flexer for the CLTK in the long run (give or take a few months), but I think Eulexis and the lemmatizer part are out of my scope right now.

    What's your opinion on this? This would help search APIs a lot for texts which are not lemmatized.
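    For a sense of what a Collatinus-style flexer could expose in the CLTK, here is a toy sketch built around a paradigm table; the endings shown and the API shape are invented for illustration and are not taken from Collatinus' data:

    # Toy first-declension paradigm; real data would come from Collatinus' tables.
    FIRST_DECLENSION = {
        "nominative": {"sg": "a", "pl": "ae"},
        "genitive": {"sg": "ae", "pl": "arum"},
        "accusative": {"sg": "am", "pl": "as"},
    }

    def decline(stem: str) -> dict:
        """Return {case: {number: form}} for a first-declension stem."""
        return {case: {num: stem + ending for num, ending in nums.items()}
                for case, nums in FIRST_DECLENSION.items()}

    print(decline("ros")["accusative"]["pl"])  # rosas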

    opened by PonteIneptique 28
  • Normalize Unicode throughout CLTK

    Normalize Unicode throughout CLTK

    I've been reading about normalize() and hope it will prevent normalization problems in the future. This built-in function solves the problem of accented characters built with combining diacritics not comparing equal to their precomposed equivalents. Examples of this appear in the testing library, where I have struggled to make two strings of accented Greek equal one another.

    Example of normalize() from Fluent Python by Luciano Ramalho (117-118):

    >>> from unicodedata import normalize
    >>> s1 = 'café' # composed "e" with acute accent
    >>> s2 = 'cafe\u0301' # decomposed "e" and acute accent 
    >>> len(s1), len(s2)
    (4, 5)
    >>> len(normalize('NFC', s1)), len(normalize('NFC', s2)) 
    (4, 4)
    >>> len(normalize('NFD', s1)), len(normalize('NFD', s2)) 
    (5, 5)
    >>> normalize('NFC', s1) == normalize('NFC', s2)
    True
    >>> normalize('NFD', s1) == normalize('NFD', s2) 
    True
    

    Solutions

    1. In core, use normalize() with the argument 'NFC', as Fluent Python recommends (see the sketch below). Not all Greek combining forms may reduce to precomposed characters; this will need to be tested.

    2. In tests, especially for assertEqual(), check that more complicated strings equal one another. Use normalize('NFC', <text>) on the comparison strings, too, if necessary.

    3. Use this to strip out accented characters coming from the PHI, which I don't do very gracefully here: https://github.com/kylepjohnson/cltk/blob/master/cltk/corpus/utils/formatter.py#L94

    Docs: https://docs.python.org/3.4/library/unicodedata.html#unicodedata.normalize
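    A minimal sketch of solutions 1 and 2, assuming a small shared helper; the name cltk_normalize and the compatibility flag are only suggestions:

    from unicodedata import normalize

    def cltk_normalize(text: str, compatibility: bool = False) -> str:
        """Return text in a canonical Unicode form ('NFC' by default)."""
        return normalize("NFKC" if compatibility else "NFC", text)

    # In tests, normalize both sides before comparing (solution 2):
    assert cltk_normalize("café") == cltk_normalize("cafe\u0301")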

    enhancement 
    opened by kylepjohnson 25
  • add Latin WordNet API

    add Latin WordNet API

    The Latin WordNet API mimics the NLTK Princeton WordNet API in all major respects; however, because the data is sourced from latinwordnet.exeter.ac.uk (rather than locally), a number of under-the-hood changes were made. Many access methods now return generators rather than lists, and in general the API is now 'lazy' wherever multiple HTTP requests would cause a bottleneck. The Resnik, Jiang-Conrath, and Lin similarity scoring functions work, but require availability of a corpus-based information content file (forthcoming).
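    To illustrate the 'lazy' design, here is a hedged sketch of a generator that only issues HTTP requests as results are consumed; the endpoint path, pagination scheme, and JSON shape are assumptions for illustration, not the actual latinwordnet.exeter.ac.uk API:

    import requests

    def lemmas(word: str, base_url: str = "https://latinwordnet.exeter.ac.uk"):
        """Yield lemma records page by page; no request is made until iteration starts."""
        page = 1
        while True:
            # Hypothetical endpoint and pagination scheme, for illustration only.
            resp = requests.get(f"{base_url}/api/lemmas/{word}/", params={"page": page})
            resp.raise_for_status()
            data = resp.json()
            yield from data.get("results", [])
            if not data.get("next"):
                break
            page += 1

    # Only as many pages are fetched as the caller actually consumes:
    # first_lemma = next(lemmas("uirtus"), None)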

    opened by wmshort 24
  • Write syllabifiers for Indian languages

    Write syllabifiers for Indian languages

    This ticket is for @soumyag213

    As discussed by email, you'll port this and related modules to the CLTK from the Indic NLP Library.

    For a first step, I'd like to see this working in your own repo, which you have started at: https://github.com/soumyag213/cltk-beginning-indo. In the README for this, I would like to see an example of its API. For example, I imagine you showing something like this in the Python shell (BTW, I like IPython):

    In [1]: from indic_syllabifier import orthographic_syllabify
    In [2]: orthographic_syllabify('supercalifragilisticexpialidocious', 'tamil')
    Out[2]: 'su-per-cal-i-fra-gil-ist-ic-ex-pi-al-i-doc-ious'
    
    enhancement 
    opened by kylepjohnson 24
  • Processing text with square brackets using the Latin NLP pipeline

    Processing text with square brackets using the Latin NLP pipeline

    I noticed an anomaly processing Latin text with the default pipeline. The tokenizer fails to separate square brackets from the words they enclose.

    text = 'Benedictus XVI [Iosephus Aloisius Ratzinger] fuit papa et episcopus Romanus.'
    
    from cltk import NLP
    
    cltk_nlp = NLP('lat')
    cltk_nlp.analyze(text).tokens
    

    Result:

    ['Benedictus', 'XVI', '[Iosephus', 'Aloisius', 'Ratzinger]', 'fuit', 'papa', 'et', 'episcopus', 'Romanus', '.']
    

    The problem does not occur when the LatinWordTokenizer is used.

    from cltk.tokenizers.lat.lat import LatinWordTokenizer
    
    tokenizer = LatinWordTokenizer()
    tokenizer.tokenize(text)
    

    Result:

    ['Benedictus', 'XVI', '[', 'Iosephus', 'Aloisius', 'Ratzinger', ']', 'fuit', 'papa', 'et', 'episcopus', 'Romanus', '.']
    

    Environment: Windows 10 + python 3.9.13 + cltk 1.1.6.
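    Until the pipeline's tokenizer handles this, one possible workaround is to pad the brackets with spaces before calling analyze(); this is a stopgap sketch, not a CLTK feature:

    import re

    def pad_brackets(text: str) -> str:
        """Put whitespace around square brackets so they become separate tokens."""
        return re.sub(r"\s*([\[\]])\s*", r" \1 ", text).strip()

    text = 'Benedictus XVI [Iosephus Aloisius Ratzinger] fuit papa et episcopus Romanus.'
    print(pad_brackets(text))
    # 'Benedictus XVI [ Iosephus Aloisius Ratzinger ] fuit papa et episcopus Romanus.'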

    bug 
    opened by DavideMassidda 0
  • SpaCy process

    SpaCy process

    I added the spaCy process with a custom wrapper to translate spaCy Token objects into CLTK Word objects. The aim is to be able to use trained models provided by spaCy with the CLTK.
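    In spirit, the wrapper does something like the following; a simplified stand-in Word dataclass is used here, since the real cltk.core.data_types.Word carries more fields:

    from dataclasses import dataclass

    @dataclass
    class Word:
        """Simplified stand-in for cltk.core.data_types.Word."""
        string: str
        lemma: str
        pos: str

    def spacy_doc_to_words(doc) -> list[Word]:
        """Convert spaCy Token objects into CLTK-style Word objects."""
        return [Word(string=t.text, lemma=t.lemma_, pos=t.pos_) for t in doc]

    # Usage, assuming some installed spaCy model:
    # import spacy
    # nlp = spacy.load("<model name>")
    # words = spacy_doc_to_words(nlp("Gallia est omnis divisa in partes tres."))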

    opened by clemsciences 0
  • A way to tell what tokens `LatinBackOffLemmatizer()` has failed to lemmatize

    A way to tell what tokens `LatinBackOffLemmatizer()` has failed to lemmatize

    In LatinBackOffLemmatizer() and the lemmatizers in its chain, I can't seem to find an option to return an empty value when the lemmatizer fails to assign a lemma (such as OldEnglishDictionaryLemmatizer()'s best_guess=False option), instead of returning the input value.

    Without such an option, it doesn't seem possible to tell successful from unsuccessful lemmatization attempts programmatically, severely limiting the range of the lemmatizer's applications.
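    Until such an option exists, one imperfect workaround is to flag tokens whose returned lemma is identical to the input; this also flags genuinely self-identical lemmata, which is exactly the ambiguity described above. The sketch assumes the backoff lemmatizer's lemmatize() returns (token, lemma) pairs:

    from cltk.lemmatize.lat import LatinBackoffLemmatizer

    lemmatizer = LatinBackoffLemmatizer()
    tokens = ["arma", "uirumque", "cano", "xyzzy"]

    # Tokens whose lemma merely echoes the input *may* be lemmatization failures.
    suspect = [tok for tok, lemma in lemmatizer.lemmatize(tokens)
               if lemma.lower() == tok.lower()]
    print(suspect)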

    question acknowledged feature-request 
    opened by langeslag 6
  • Bump certifi from 2022.5.18.1 to 2022.12.7

    Bump certifi from 2022.5.18.1 to 2022.12.7

    Bumps certifi from 2022.5.18.1 to 2022.12.7.


    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Unicode issue with Greek accented vowels in prosody

    Unicode issue with Greek accented vowels in prosody

    Unicode has two code points for each acute-accented vowel, one in the Greek and Coptic block and one in the Greek Extended block (for omicron they are U+03CC and U+1F79). The list of accented vowels only takes into account the acute accents in the Greek and Coptic block, resulting in some vowels not being properly scanned.

    >>> from cltk.prosody.grc import Scansion
    >>> text_string = "πότνια, θῦμον"
    >>> Scansion()._make_syllables(text_string)
    [[['πότνι', 'α'], ['θῦ', 'μον']]]
    

    Expected behavior

    >>> from cltk.prosody.grc import Scansion
    >>> text_string = "πότνια, θῦμον"
    >>> Scansion()._make_syllables(text_string)
    [[['πο', 'τνι' , 'α'], ['θῦ', 'μον']]]
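    A possible interim workaround is to normalize the input to NFC, which canonically maps the Greek Extended oxia forms onto the Greek and Coptic tonos forms; whether this fully fixes scansion depends on the scanner's vowel list:

    from unicodedata import normalize

    print("\u1f79" == normalize("NFC", "\u1f79"))  # False: U+1F79 (oxia) becomes U+03CC (tonos)

    text_string = "πότνια, θῦμον"
    normalized = normalize("NFC", text_string)

    # from cltk.prosody.grc import Scansion
    # Scansion()._make_syllables(normalized)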
    

    Desktop

    • MacOS 13.0
    bug 
    opened by JoshuaCCampbell 1
  • Latin enclitic tokenizer broken?

    Latin enclitic tokenizer broken?

    The Latin tokenizer does not separate the enclitics -que, -ne, -ve. In line 147 of tokenizers/lat/lat.py I suggest: `specific_tokens += [token[: -len(enclitic)]] + ["-" + enclitic]`. This fixed it for me.
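    A self-contained sketch of the splitting behavior the suggested one-liner aims at; this illustrates the idea only and is not the actual code from tokenizers/lat/lat.py:

    ENCLITICS = ["que", "ne", "ve"]

    def split_enclitic(token: str) -> list:
        """Split a trailing enclitic off a token, marking it with a leading '-'."""
        for enclitic in ENCLITICS:
            if token.endswith(enclitic) and len(token) > len(enclitic):
                return [token[: -len(enclitic)], "-" + enclitic]
        return [token]

    print(split_enclitic("virumque"))  # ['virum', '-que']
    print(split_enclitic("senatus"))   # ['senatus']
    # Real tokenization also needs an exception list (e.g. 'bene' should not split).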

    Mac OS 15.7 Python 3.9

    bug 
    opened by polycrates 3
Releases: 1.0.15
Owner: Classical Language Toolkit (natural language processing for Classical languages)