The Classical Language Toolkit

Overview

Notice: This Git branch (dev) contains the CLTK's upcoming major release (v. 1.0.0). See https://github.com/cltk/cltk/tree/master and https://docs.cltk.org/ for the legacy code and docs.


The Classical Language Toolkit (CLTK) is a Python library offering natural language processing (NLP) for the languages of pre-modern Eurasia.

Installation

For the CLTK's latest pre-release version:

$ pip install --pre cltk
Requirements:

Documentation

Documentation at https://dev.cltk.org.
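
A minimal usage example, mirroring the Latin pipeline shown in the issues below (output fields may vary by release):

>>> from cltk import NLP
>>> cltk_nlp = NLP('lat')
>>> doc = cltk_nlp.analyze('Gallia est omnis divisa in partes tres.')
>>> doc.tokens[:5]
['Gallia', 'est', 'omnis', 'divisa', 'in']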

Citation

@Misc{johnsonetal2014,
 author = {Johnson, Kyle P. and Patrick Burns and John Stewart and Todd Cook},
 title = {CLTK: The Classical Language Toolkit},
 url = {https://github.com/cltk/cltk},
 year = {2014--2020},
}

License

Copyright (c) 2014-2021 Kyle P. Johnson under the MIT License.

Comments
  • Add Sanskrit stopwords

    For @Akhilesh28. (Please assign this to yourself.)

    In Sanskrit, a stopword list would include, at a minimum, pronouns and determiners, as well as upasarga (verbal prefixes / "preverbs" / "prepositions") and nipāta (particles). Also add anything like conjunctions, particles, and interjections.

    Putting this list together shouldn't take more than a week. Let us know if you run into problems. You can post your stopwords first as a "gist": https://gist.github.com/

    opened by kylepjohnson 55
  • Add IPA Phonetic Transcription for Greek

    This ticket is for Jack Duff, with @jtauber generously assisting.

    The basic idea is to make a map of Greek letters and their IPA equivalents, something like:

    {'α': 'a',
     'αι': 'ai',
     'ζ': 'zd',
     'θ': 'tʰ'}
    

    Obviously, it won't all be so easy, since neighboring characters can change the pronunciation (for example, "γ" is IPA "ɡ" but becomes "ŋ" before ["κ", "χ", "γ", "μ"]).

    If you can get this down for Attic, then consider moving on to other dialects, like Ionic or Koine.

    Within the CLTK's architecture, the transliteration maps and logic should go into something like cltk/phonetics/greek/transcription.py. Or consider making a general transcription entry point at cltk/phonetics/transcription.py and then declaring which language and dialect to use. I'll leave the implementation details to you two, though.
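
    A minimal sketch of the lookup described above (not the final design): digraphs are matched before single letters, and one context rule handles the gamma nasal. The map values are rough Attic approximations and the function name is only illustrative.

    GRC_TO_IPA = {'αι': 'ai', 'α': 'a', 'γ': 'ɡ', 'ζ': 'zd', 'θ': 'tʰ',
                  'κ': 'k', 'μ': 'm', 'ν': 'n', 'ο': 'o'}

    def transcribe(text):
        ipa = []
        i = 0
        while i < len(text):
            # Context rule: γ before κ, χ, γ, μ is the velar nasal.
            if text[i] == 'γ' and i + 1 < len(text) and text[i + 1] in 'κχγμ':
                ipa.append('ŋ')
                i += 1
            # Prefer two-character (digraph) matches over single letters.
            elif text[i:i + 2] in GRC_TO_IPA:
                ipa.append(GRC_TO_IPA[text[i:i + 2]])
                i += 2
            else:
                ipa.append(GRC_TO_IPA.get(text[i], text[i]))
                i += 1
        return ''.join(ipa)

    print(transcribe('αγκα'))  # 'aŋka'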

    enhancement 
    opened by kylepjohnson 51
  • Words to be added in Sanskrit's Stop Word Collection

    • ~सः (he)~
    • ~स्वयम् (himself)~
    • तदीय (theirs)
    • आसम् (be)
    • ज्ञा (have)
    • परि (with)
    • शक्नोति (can, verb)
    • यद् (if)
    • कतम (which)

    Add all of the words in all of their cases, genders, and all three numbers (singular, dual, plural). If you do it right, there should be exactly 72 word forms for each entry (including a few repetitions). Be careful with the verbs' word forms; they have entirely different structures.

    File at: https://github.com/cltk/cltk/blob/master/cltk/stop/sanskrit/stops.py
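
    A sketch of the expected module format (the variable name STOPS_LIST and the illustrative entries are assumptions; the real list would carry the full set of forms for each entry):

    # cltk/stop/sanskrit/stops.py (sketch)
    STOPS_LIST = [
        'सः',       # he (nominative singular masculine)
        'तम्',      # him (accusative singular masculine)
        'स्वयम्',    # himself
        'यद्',      # if / which
        # ... remaining case, gender, and number forms for each entry ...
    ]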

    opened by nikheelpandey 42
  • Scraping srimad-bhagavadgita and valmiki ramayana.

    • I am scraping Sanskrit-English data from
      • Srimad-bhagavadgita : http://www.gitasupersite.iitk.ac.in/srimad
      • Valmiki Ramayana : http://www.valmiki.iitk.ac.in/

    Ping @kylepjohnson

    new corpus 
    opened by ghost 36
  • Add corpus for classical telugu

    https://te.wikisource.org/wiki contains the classical Telugu itihasas, puranas, vedas, stotras, etc., so I would like to scrape them and add them as a new corpus.

    Thank you.

    new corpus 
    opened by ghost 31
  • Make stopwords list for Old English

    To generalize, I observe that there are different approaches to making stopword lists, based on statistics (most common words, variously calculated), on grammar (definite and indefinite articles, pronouns, etc.), or on some combination of the two.

    In doing this ticket, I would like you to do a little research on whether there exist any good lists for OE. If there is one, let's just take it. If not, we can do a little more research about what's right.
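
    If the statistical route wins out, here is a minimal sketch of the frequency-based approach mentioned above (corpus loading is left out; tokens is assumed to be a list of lowercased word tokens from an Old English text):

    from collections import Counter

    def stopword_candidates(tokens, n=100):
        """Return the n most frequent tokens as candidate stopwords."""
        return [word for word, _ in Counter(tokens).most_common(n)]

    tokens = 'se cyning and se eorl and þa menn'.split()
    print(stopword_candidates(tokens, n=3))  # ['se', 'and', 'cyning']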

    enhancement easy 
    opened by kylepjohnson 29
  • Scraping Raw Classical Hindi Data

    I am scraping Raw Classical Hindi Data from http://ltrc.iiit.ac.in/showfile.php?filename=downloads/Classical_Hindi_Literature/SHUSHA/index.html @kylepjohnson

    new corpus 
    opened by Akirato 29
  • Add declining tool based on Collatinus and Eulexis ?

    Hi there, I have been thinking about this for months, and I do not think the CLTK contains anything like it. Collatinus and Eulexis are two open-source lemmatizers and decliners (their data is either open or easy to reconstruct, and they are a nice bunch of people).

    • Collatinus is in C
      • https://github.com/biblissima/collatinus is the most up to date source code for the flexer / lemmatizer
      • https://github.com/ycollatin/Collatinus-data is the repo for their data (I first thought it was not up to date, but it seems this is actually the more current one).
    • Eulexis is in php
      • https://github.com/biblissima/eulexis/blob/master/traitement.php For the whole code

    I'd be happy to convert the Collatinus flexer for the CLTK in the long run (give or take a few months), but I think Eulexis and the lemmatizer part are out of scope for me right now.

    What's your opinion on this? It would help search APIs a lot for texts that are not lemmatized.

    opened by PonteIneptique 28
  • Normalize Unicode throughout CLTK

    I've been reading about normalize() and hope it will prevent normalization problems in the future. This standard-library function solves the problem of accented characters built with combining diacritics not comparing equal to their precomposed counterparts. Examples of this appear in the testing library, where I have struggled to make two strings of accented Greek equal one another.

    Example of normalize() from Fluent Python by Luciano Ramalho (117-118):

    >>> from unicodedata import normalize
    >>> s1 = 'café' # composed "e" with acute accent
    >>> s2 = 'cafe\u0301' # decomposed "e" and acute accent 
    >>> len(s1), len(s2)
    (4, 5)
    >>> len(normalize('NFC', s1)), len(normalize('NFC', s2)) 
    (4, 4)
    >>> len(normalize('NFD', s1)), len(normalize('NFD', s2)) 
    (5, 5)
    >>> normalize('NFC', s1) == normalize('NFC', s2)
    True
    >>> normalize('NFD', s1) == normalize('NFD', s2) 
    True
    

    Solutions

    1. In core, use normalize with the argument 'NFC', as Fluent Python recommends. Not all Greek combining forms may reduce to precomposed characters; this will need to be tested.

    2. In tests, especially for assertEqual(), check that more complicated strings equal one another. Use normalize('NFC', <text>) on the comparison strings, too, if necessary.

    3. Use this to strip out accented characters coming from the PHI, which I don't do very gracefully here: https://github.com/kylepjohnson/cltk/blob/master/cltk/corpus/utils/formatter.py#L94

    Docs: https://docs.python.org/3.4/library/unicodedata.html#unicodedata.normalize
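
    For solutions 1 and 2, a sketch of what a shared helper and a test comparison might look like (the helper name cltk_normalize is illustrative here, not necessarily what we would ship):

    import unittest
    from unicodedata import normalize

    def cltk_normalize(text, compatibility=False):
        """Normalize to NFC by default, per the recommendation above."""
        return normalize('NFKC' if compatibility else 'NFC', text)

    class TestNormalization(unittest.TestCase):
        def test_combining_equals_precomposed(self):
            s1 = 'café'          # precomposed é (U+00E9)
            s2 = 'cafe\u0301'    # 'e' plus combining acute (U+0301)
            self.assertEqual(cltk_normalize(s1), cltk_normalize(s2))

    if __name__ == '__main__':
        unittest.main()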

    enhancement 
    opened by kylepjohnson 25
  • add Latin WordNet API

    The Latin WordNet API mimics the NLTK Princeton WordNet API in all major respects; however, because the data is sourced from latinwordnet.exeter.ac.uk (rather than locally), a number of under-the-hood changes were made. Many access methods now return generators rather than lists, and in general the API is now 'lazy' wherever multiple HTTP requests would cause a bottleneck. The Resnik, Jiang-Conrath, and Lin similarity scoring functions work, but require the availability of a corpus-based information content file (forthcoming).

    opened by wmshort 24
  • Write syllabifiers for Indian languages

    This ticket is for @soumyag213

    As discussed by email, you'll port this and related modules to the CLTK from the Indic NLP Library.

    For a first step, I'd like to see this working in your own repo, which you have started at https://github.com/soumyag213/cltk-beginning-indo. In its README, I would like to see an example of the API. For example, I imagine you showing something like this in the Python shell (BTW, I like IPython):

    In [1]: from indic_syllabifier import orthographic_syllabify
    In [2]: orthographic_syllabify('supercalifragilisticexpialidocious', 'tamil')
    Out[2]: 'su-per-cal-i-fra-gil-ist-ic-ex-pi-al-i-doc-ious'
    
    enhancement 
    opened by kylepjohnson 24
  • Processing text with square brackets using the Latin NLP pipeline

    I noticed an anomaly processing Latin text with the default pipeline. The tokenizer fails to separate square brackets from the words they enclose.

    text = 'Benedictus XVI [Iosephus Aloisius Ratzinger] fuit papa et episcopus Romanus.'
    
    from cltk import NLP
    
    cltk_nlp = NLP('lat')
    cltk_nlp.analyze(text).tokens
    

    Result:

    ['Benedictus', 'XVI', '[Iosephus', 'Aloisius', 'Ratzinger]', 'fuit', 'papa', 'et', 'episcopus', 'Romanus', '.']
    

    The problem does not occur when the LatinWordTokenizer is used.

    from cltk.tokenizers.lat.lat import LatinWordTokenizer
    
    tokenizer = LatinWordTokenizer()
    tokenizer.tokenize(text)
    

    Result:

    ['Benedictus', 'XVI', '[', 'Iosephus', 'Aloisius', 'Ratzinger', ']', 'fuit', 'papa', 'et', 'episcopus', 'Romanus', '.']
    

    Environment: Windows 10 + Python 3.9.13 + CLTK 1.1.6.

    bug 
    opened by DavideMassidda 0
  • SpaCy process

    I added a spaCy process with a custom wrapper to translate a spaCy Token into a CLTK Word. The aim is to be able to use trained models provided by spaCy with the CLTK.
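
    A rough sketch of what such a wrapper does (not the PR code itself; the Word field names follow cltk.core.data_types and should be treated as assumptions):

    from cltk.core.data_types import Word

    def spacy_token_to_word(token, index):
        """Map a spaCy Token onto a CLTK Word (field names assumed)."""
        return Word(
            index_token=index,
            string=token.text,
            lemma=token.lemma_,
            upos=token.pos_,
        )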

    opened by clemsciences 0
  • A way to tell what tokens `LatinBackOffLemmatizer()` has failed to lemmatize

    In LatinBackOffLemmatizer() and the lemmatizers in its chain, I can't seem to find an option to return an empty value when the lemmatizer fails to assign a lemma (along the lines of OldEnglishDictionaryLemmatizer()'s best_guess=False option), instead of returning the input value.

    Without such an option, it doesn't seem possible to tell successful from unsuccessful lemmatization attempts programmatically, severely limiting the range of the lemmatizer's applications.
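
    In the meantime, a rough workaround that leans on the behavior described above (a failed lookup hands back the input token unchanged). The import path and class spelling below follow the current docs and should be double-checked:

    from cltk.lemmatize.lat import LatinBackoffLemmatizer

    lemmatizer = LatinBackoffLemmatizer()
    pairs = lemmatizer.lemmatize(['arma', 'virumque', 'cano', 'xyzzy'])

    # Heuristic only: a lemma identical to its token may mean the lemmatizer
    # fell through to the identity backoff, but many valid lemmas also equal
    # their surface form, so this over-reports failures.
    suspect = [token for token, lemma in pairs if lemma == token]
    print(suspect)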

    question acknowledged feature-request 
    opened by langeslag 6
  • Bump certifi from 2022.5.18.1 to 2022.12.7

    Bumps certifi from 2022.5.18.1 to 2022.12.7.


    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.



    dependencies 
    opened by dependabot[bot] 0
  • Unicode issue with Greek accented vowels in prosody

    Unicode has two code points for each acute-accented vowel, one in the Greek and Coptic block and one in the Greek Extended block (for omicron they are U+03CC and U+1F79). The list of accented vowels only takes into account the acute accents in the Greek and Coptic block, resulting in some vowels not being properly scanned.

    >>> from cltk.prosody.grc import Scansion
    >>> text_string = "πότνια, θῦμον"
    >>> Scansion()._make_syllables(text_string)
    [[['πότνι', 'α'], ['θῦ', 'μον']]]
    

    Expected behavior

    >>> from cltk.prosody.grc import Scansion
    >>> text_string = "πότνια, θῦμον"
    >>> Scansion()._make_syllables(text_string)
    [[['πο', 'τνι', 'α'], ['θῦ', 'μον']]]
    

    Desktop

    • MacOS 13.0
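
    A possible workaround until the vowel list covers both blocks: NFC-normalize the input first, since Unicode canonically maps the Greek Extended oxia code points onto the Greek and Coptic tonos code points. This is a sketch of the workaround, not the library's fix.

    from unicodedata import normalize

    # U+1F79 (omicron with oxia) normalizes to U+03CC (omicron with tonos).
    print(hex(ord(normalize('NFC', '\u1f79'))))  # 0x3cc

    text_string = normalize('NFC', "πότνια, θῦμον")
    # Scansion()._make_syllables(text_string) should now see the code points
    # that the accented-vowel list expects.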
    bug 
    opened by JoshuaCCampbell 1
  • Latin enclitic tokenizer broken?

    The Latin tokenizer does not separate the enclitics -que, -ne, -ve. At line 147 of tokenizers/lat/lat.py I suggest:

    specific_tokens += [token[: -len(enclitic)]] + ["-" + enclitic]

    This fixed it for me.
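
    For illustration, a standalone sketch of the splitting behavior the suggested change produces (the names here are illustrative, not the module's internals):

    ENCLITICS = ['que', 'ne', 've']

    def split_enclitic(token):
        """Split a trailing enclitic off a token, keeping it as '-que' etc.

        NB: a real implementation also needs an exceptions list, since words
        such as 'bene' end in these letters by accident.
        """
        for enclitic in ENCLITICS:
            if token.lower().endswith(enclitic) and len(token) > len(enclitic):
                return [token[: -len(enclitic)], '-' + enclitic]
        return [token]

    print(split_enclitic('virumque'))  # ['virum', '-que']
    print(split_enclitic('arma'))      # ['arma']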

    Mac OS 15.7 Python 3.9

    bug 
    opened by polycrates 3