Web mining module for Python, with tools for scraping, natural language processing, machine learning, network analysis and visualization.

Overview

Pattern


Pattern is a web mining module for Python. It has tools for:

  • Data Mining: web services (Google, Twitter, Wikipedia), web crawler, HTML DOM parser
  • Natural Language Processing: part-of-speech taggers, n-gram search, sentiment analysis, WordNet
  • Machine Learning: vector space model, clustering, classification (KNN, SVM, Perceptron)
  • Network Analysis: graph centrality and visualization

It is well documented, thoroughly tested with 350+ unit tests, and comes bundled with 50+ examples. The source code is licensed under BSD.

Example workflow


This example trains a classifier on adjectives mined from Twitter, using Python 3. First, tweets containing the hashtag #win or #fail are collected, for example: "$20 tip off a sweet little old lady today #win". Each tweet is then part-of-speech tagged, keeping only the adjectives, and transformed into a vector — a dictionary of adjective → count items — labeled WIN or FAIL. The classifier uses these vectors to learn which other tweets look more like WIN or more like FAIL.

from pattern.web import Twitter
from pattern.en import tag
from pattern.vector import KNN, count

twitter, knn = Twitter(), KNN()

for i in range(1, 3):
    for tweet in twitter.search('#win OR #fail', start=i, count=100):
        s = tweet.text.lower()
        p = 'WIN' if '#win' in s else 'FAIL'
        v = tag(s)
        v = [word for word, pos in v if pos == 'JJ'] # JJ = adjective
        v = count(v) # {'sweet': 1}
        if v:
            knn.train(v, type=p)

print(knn.classify('sweet potato burger'))
print(knn.classify('stupid autocorrect'))
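The vector-and-classify step can be sketched without Pattern itself. Below is a minimal pure-Python nearest-neighbour over count dictionaries using cosine similarity; the names here are illustrative, not Pattern's API:

```python
import math

def cosine(v1, v2):
    """Cosine similarity between two sparse count vectors (dicts)."""
    dot = sum(v1[w] * v2[w] for w in v1 if w in v2)
    n1 = math.sqrt(sum(x * x for x in v1.values()))
    n2 = math.sqrt(sum(x * x for x in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def classify(v, examples):
    """Label of the training vector most similar to v.
    examples = [(vector, label), ...]"""
    return max(examples, key=lambda e: cosine(v, e[0]))[1]

train = [({'sweet': 1, 'little': 1, 'old': 1}, 'WIN'),
         ({'stupid': 1}, 'FAIL')]
print(classify({'sweet': 1}, train))   # → WIN
```

Pattern's KNN does considerably more (k neighbours, weighting, feature selection), but the core idea is the same: tweets with overlapping adjectives end up close together in vector space.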

Installation

Pattern supports Python 2.7 and Python 3.6. To install Pattern so that it is available in all your scripts, unzip the download and from the command line do:

cd pattern-3.6
python setup.py install

If you have pip, you can automatically download and install from the PyPI repository:

pip install pattern

If none of the above works, you can make Python aware of the module in three ways:

  • Put the pattern folder in the same folder as your script.
  • Put the pattern folder in the standard location for modules so it is available to all scripts:
    • c:\python36\Lib\site-packages\ (Windows),
    • /Library/Python/3.6/site-packages/ (Mac OS X),
    • /usr/lib/python3.6/site-packages/ (Unix).
  • Add the location of the module to sys.path in your script, before importing it:
import sys
MODULE = '/users/tom/desktop/pattern'
if MODULE not in sys.path:
    sys.path.append(MODULE)
from pattern.en import parsetree

Documentation

For documentation and examples see the user documentation.

Version

3.6

License

BSD, see LICENSE.txt for further details.

Reference

De Smedt, T., Daelemans, W. (2012). Pattern for Python. Journal of Machine Learning Research, 13, 2031–2035.

Contribute

The source code is hosted on GitHub, and contributions or donations are welcome.

Bundled dependencies

Pattern is bundled with the following data sets, algorithms and Python packages:

  • Brill tagger, Eric Brill
  • Brill tagger for Dutch, Jeroen Geertzen
  • Brill tagger for German, Gerold Schneider & Martin Volk
  • Brill tagger for Spanish, trained on Wikicorpus (Samuel Reese & Gemma Boleda et al.)
  • Brill tagger for French, trained on Lefff (Benoît Sagot & Lionel Clément et al.)
  • Brill tagger for Italian, mined from Wiktionary
  • English pluralization, Damian Conway
  • Spanish verb inflection, Fred Jehle
  • French verb inflection, Bob Salita
  • Graph JavaScript framework, Aslak Hellesoy & Dave Hoover
  • LIBSVM, Chih-Chung Chang & Chih-Jen Lin
  • LIBLINEAR, Rong-En Fan et al.
  • NetworkX centrality, Aric Hagberg, Dan Schult & Pieter Swart
  • spelling corrector, Peter Norvig

Acknowledgements

Authors: Tom De Smedt, Walter Daelemans

Contributors (chronological):

  • Frederik De Bleser
  • Jason Wiener
  • Daniel Friesen
  • Jeroen Geertzen
  • Thomas Crombez
  • Ken Williams
  • Peteris Erins
  • Rajesh Nair
  • F. De Smedt
  • Radim Řehůřek
  • Tom Loredo
  • John DeBovis
  • Thomas Sileo
  • Gerold Schneider
  • Martin Volk
  • Samuel Joseph
  • Shubhanshu Mishra
  • Robert Elwell
  • Fred Jehle
  • Antoine Mazières + fabelier.org
  • Rémi de Zoeten + closealert.nl
  • Kenneth Koch
  • Jens Grivolla
  • Fabio Marfia
  • Steven Loria
  • Colin Molter + tevizz.com
  • Peter Bull
  • Maurizio Sambati
  • Dan Fu
  • Salvatore Di Dio
  • Vincent Van Asch
  • Frederik Elwert

Comments
  • new:irregular inflection of prefix verbs with known base


    Fix implementing logic to correctly identify the (irregular) base of prefixed verbs. Old:

    >>> conjugate('gehen', (de.PAST, 2, de.SINGULAR)) 
    'gingst' # correct
    >>> conjugate('vorgehen', (de.PAST, 2, de.SINGULAR))
    'gehtest vor' # incorrect
    

    Explanation: since 'vorgehen' is not found in the lexicon, a default regular inflection strategy applies. Even though the separable prefix is correctly identified, the extracted base form is not checked against the lexicon, so the available information about its irregular inflection is lost.

    New:

    >>> conjugate('gehen', (de.PAST, 2, de.SINGULAR)) 
    'gingst' # correct
    >>> conjugate('vorgehen', (de.PAST, 2, de.SINGULAR))
    'gingst vor' # correct
    

    This fix is achieved with a second pass through lemma() after stripping the prefix, which identifies the known irregular inflection of the base form 'gehen'.

    Further, blacklists have been added for verbs that only look like prefix verbs, or like latinate verbs with the suffix 'ier(en)', to block the parser's exceptional treatment of those.
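The two-pass idea can be illustrated in isolation. This is a toy sketch with a hypothetical mini-lexicon, not Pattern's actual implementation:

```python
# Toy irregular lexicon: (infinitive, tense-key) -> inflected form.
LEXICON = {('gehen', 'past-2sg'): 'gingst'}
PREFIXES = ('vor', 'an', 'auf')

def conjugate(verb, key):
    # First pass: the verb itself may be listed as irregular.
    if (verb, key) in LEXICON:
        return LEXICON[(verb, key)]
    # Second pass: strip a separable prefix and retry on the base.
    for p in PREFIXES:
        if verb.startswith(p) and (verb[len(p):], key) in LEXICON:
            return LEXICON[(verb[len(p):], key)] + ' ' + p
    # Fallback: naive regular inflection (stem + ending).
    return verb[:-2] + 'test'

print(conjugate('vorgehen', 'past-2sg'))  # → 'gingst vor'
```

The fix described above adds exactly this second lexicon lookup on the stripped base form, so 'vorgehen' inherits the irregular past tense of 'gehen'.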

    opened by JakobSteixner 10
  • fix issue with pattern shadowing stdlib module `parser`


    After installing pattern, previously working code started failing with

      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/npyio.py", line 348, in load
        return format.open_memmap(file, mode=mmap_mode)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/format.py", line 556, in open_memmap
        shape, fortran_order, dtype = read_array_header_1_0(fp)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/format.py", line 336, in read_array_header_1_0
        d = safe_eval(header)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/utils.py", line 1137, in safe_eval
        ast = compiler.parse(source, mode="eval")
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/compiler/transformer.py", line 53, in parse
        return Transformer().parseexpr(buf)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/compiler/transformer.py", line 132, in parseexpr
        return self.transform(parser.expr(text))
    AttributeError: 'module' object has no attribute 'expr'
    

    After digging around a bit, this stems from the stdlib `compiler` module doing `import parser`. Unfortunately, loading a parser from pattern with e.g. `from pattern.en import parse` makes the `compiler` module "see" the wrong `parser` — `pattern.text.en.parser` instead of the stdlib one.

    My resolution was to manually import `compiler` before importing the rest of pattern, but it feels like a hack. A better fix, I think, is not to reuse stdlib module names.
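The shadowing mechanics are easy to reproduce with a throwaway module name (`shadowdemo` is invented for this demo; pure stdlib):

```python
import os
import sys
import tempfile

# Two directories, each containing a module with the same name.
d1, d2 = tempfile.mkdtemp(), tempfile.mkdtemp()
for d, tag in ((d1, 'first'), (d2, 'second')):
    with open(os.path.join(d, 'shadowdemo.py'), 'w') as f:
        f.write('WHO = %r\n' % tag)

# Whichever directory comes earlier on sys.path wins the import.
sys.path.insert(0, d2)
sys.path.insert(0, d1)   # d1 now precedes d2

import shadowdemo
print(shadowdemo.WHO)    # → 'first': d1 shadows d2, just like pattern's parser shadowed the stdlib one
```

The same path-ordering rule is why a package directory named after a stdlib module can hijack imports made by unrelated code.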

    opened by piskvorky 6
  • IndexError: list index out of range


    When I use a taxonomy search as in the demo code below, I get an IndexError exception with this stack trace:

    from pattern.en     import parsetree
    from pattern.search import search, Pattern, Constraint, Taxonomy, WordNetClassifier
    
    wn = Taxonomy()
    wn.classifiers.append(WordNetClassifier())
    
    p = Pattern()
    p.sequence.append(Constraint.fromstring("{COLOR?}", taxonomy=wn))
    
    pt = parsetree('the new iphone is availabe in silver, black, gold and white', relations=True, lemmata=True)
    print p.search(pt)
    

    Traceback (most recent call last):
      File "bug.py", line 11, in <module>
        print p.search(pt)
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 746, in search
        a=[]; [a.extend(self.search(s)) for s in sentence]; return a
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 750, in search
        m = self.match(sentence, _v=v)
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 770, in match
        m = self._match(sequence, sentence, start)
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 838, in _match
        if i < len(sequence) and constraint.match(w):
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 620, in match
        for p in self.taxonomy.parents(s, recursive=True):
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 331, in parents
        return unique(dfs(self._normalize(term), recursive, {}, **kwargs))
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 327, in dfs
        a.extend(classifier.parents(term, **kwargs) or [])
      File "/usr/local/lib/python2.7/dist-packages/pattern/text/search.py", line 415, in _parents
        try: return [w.senses[0] for w in self.wordnet.synsets(word, pos)[0].hypernyms()]
    IndexError: list index out of range
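The crash is the `[0]` index on an empty synset list inside the WordNet classifier. Below is a defensive version of that lookup, sketched with stub objects — the names `parents` and `Stub` are invented for illustration, not Pattern's API:

```python
def parents(word, synsets):
    """Hypernym senses for `word`, or [] when the lookup yields nothing.

    `synsets` is a stand-in for a WordNet lookup returning a list of
    synset objects, each with .hypernyms() -> synsets carrying a .senses list."""
    found = synsets(word)
    if not found:          # guard: unknown word -> no synsets -> no IndexError
        return []
    return [s.senses[0] for s in found[0].hypernyms()]

class Stub:
    """Minimal synset stand-in for the sketch."""
    def __init__(self, senses=(), hypernyms=()):
        self.senses = list(senses)
        self._hypernyms = list(hypernyms)
    def hypernyms(self):
        return self._hypernyms

lookup = {'silver': [Stub(hypernyms=[Stub(senses=['metal'])])]}
wordnet = lambda w: lookup.get(w, [])
print(parents('silver', wordnet))    # → ['metal']
print(parents('availabe', wordnet))  # → [] instead of IndexError
```

Note that the misspelled token 'availabe' in the demo sentence is exactly the kind of word WordNet has no synsets for, which is what triggers the crash.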

    opened by darenr 5
  • Calling download() twice on a Result object results in an error


    Example run

    rss = Newsfeed().search('http://feeds.feedburner.com/Techcrunch')
    dld = rss[4].download()
    dld = rss[4].download()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python2.7/dist-packages/pattern/web/__init__.py", line 846, in download
        return URL(self.url).download(*args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/pattern/web/__init__.py", line 391, in download
        return cache.get(id, unicode=False)
    TypeError: get() takes no keyword arguments
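Until the cache call is fixed, a band-aid at the call site is to memoize the first download yourself. Illustrative only — `fetch` stands in for `Result.download`:

```python
def memoized(fetch):
    """Cache the first result per URL; later calls reuse it."""
    cache = {}
    def wrapper(url):
        if url not in cache:
            cache[url] = fetch(url)
        return cache[url]
    return wrapper

calls = []

@memoized
def fetch(url):
    calls.append(url)              # pretend this is the network hit
    return '<html>%s</html>' % url

a = fetch('http://example.com/feed')
b = fetch('http://example.com/feed')
print(a == b, len(calls))          # → True 1 (second call served from the local cache)
```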

    opened by jagatsastry 5
  • match groups in search syntax


    I'd love for the search syntax to have match groups just like regex. In my preference the ? symbol and () would have the same meanings as in regex syntax, so for example if I did:

    search('There be DT (JJ? NN+)', s)

    then I would get a match against "There is a red ball", and match item 0 would be "red ball", and it would also match "There is a ball" and match item 0 would be "ball".

    However I realise that if lots of people are relying on parentheses to mean optional then it wouldn't be easy to change that.

    Failing that, how about:

    search('There be DT <(JJ) NN+>', s)

    There are more semantically rich possibilities, e.g.

    <group1>(NN+)</group1>

    however I think that might be a little verbose and get in the way of parsing the search syntax, which I think is better off kept terse and as close to regex as possible (since a lot of people are familiar with regex).
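For comparison, here is how Python's own `re` module handles the requested behaviour, with `(...)` capturing and `?` marking optionality:

```python
import re

# (...) captures a group; (?:...)? is an optional non-capturing part.
pattern = re.compile(r'There \w+ \w+ ((?:\w+ )?ball)')

m1 = pattern.search('There is a red ball')
m2 = pattern.search('There is a ball')
print(m1.group(1))  # → 'red ball'
print(m2.group(1))  # → 'ball'
```

This is the semantics the proposal asks pattern.search to mirror: the optional adjective is absorbed into the captured group when present and silently skipped when absent.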

    Many thanks in advance

    opened by tansaku 5
  • Pin Python and some dependencies versions to fix CI


    Pin the Python subversion and fix the CI config to actually use it, as discussed in #262. Subversion 3.6.5 was chosen because it is the latest known to pass every test; this should be updated in the near future.

    opened by tales-aparecida 4
  • add bracket print statement


    Python 3 raises a SyntaxError for these print statements because they lack parentheses.

    In setup.py:

    • print n
    • print hashlib.sha256(open(z.filename).read()).hexdigest()

    In pattern/text/__init__.py:

    • print '!'
    opened by ckshitij 4
  • Problems Head for Bing Search


    Hi, I have a problem with pattern.web, specifically with the Bing search engine module. When I get the results, I see this error:

    pattern.web.URLError: Invalid header value 'Basic OlZuSkVLNEhUbG50RTNTeUY1OFFMa1VDTHAvNzh0a1lqVjFGbDNKN2xIYTA9\n'

    I copied the examples exactly and the error persists. I am on Python 2.7.12; on my server with Python 2.7.9 it works fine, and on another computer with Python 2.7.6 it also works.

    I checked all the other libraries and the versions are the same — only Python differs. Could the Python version be causing the problem?

    Thanks for everything. CLIPS Pattern is amazing.

    bug 
    opened by Leanwit 4
  • Financial data sentimental analysis return low polarity


    I use the polarity function to assess the sentiment of financial text; what I observed is that Pattern's polarity function tends to give false negatives on financial data.

    An article containing this phrase tends to score negative: " .....industry is going up and stop loss should be placed at 20..."

    I think Pattern misinterprets "stop loss" as having a negative meaning.

    There are many financial sentiment dictionaries available on the web — can we use those dictionaries with Pattern? If yes, how can we do it?
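Pattern doesn't expose a one-call switch to a financial lexicon, but a domain dictionary can be layered on as a pre-scoring pass. A sketch with an invented mini-lexicon (the entries and scores below are illustrative only, not a real financial dictionary):

```python
# Hypothetical domain lexicon: phrase -> polarity in [-1, 1].
FIN_LEXICON = {
    'stop loss': 0.0,   # neutral trading term, not a negative event
    'going up': 0.5,
    'default': -0.7,
}

def domain_polarity(text, lexicon=FIN_LEXICON):
    """Average polarity of lexicon phrases found in text. Longest phrases
    are matched first, so 'stop loss' is consumed as a unit before any
    general-purpose scoring could see the word 'loss' on its own."""
    text = text.lower()
    scores = []
    for term in sorted(lexicon, key=len, reverse=True):
        if term in text:
            scores.append(lexicon[term])
            text = text.replace(term, ' ')   # consume the matched phrase
    return sum(scores) / len(scores) if scores else 0.0

print(domain_polarity('industry is going up and stop loss should be placed at 20'))
# → 0.25
```

A real integration would fall back to Pattern's general sentiment() for text the domain lexicon doesn't cover; the sketch only shows the phrase-first scoring idea.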

    opened by ghost 4
  • wordnet issues


    I've made a fresh install of pattern-master yesterday and I'm running into issues with wordnet:

    from pattern.en import wordnet
    wordnet.synsets("train")

    Traceback (most recent call last):
      File "<pyshell#2>", line 1, in <module>
        wordnet.synsets("train")
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/__init__.py", line 95, in synsets
        return [Synset(s.synset) for i, s in enumerate(w)]
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 316, in __getitem__
        return self.getSenses()[index]
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 242, in getSenses
        self._senses = tuple(map(getSense, self._synsetOffsets))
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 241, in getSense
        return getSynset(pos, offset)[form]
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 1090, in getSynset
        return _dictionaryFor(pos).getSynset(offset)
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 827, in getSynset
        return _entityCache.get((pos, offset), loader)
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 1308, in get
        value = loadfn and loadfn()
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 826, in loader
        return Synset(pos, offset, _lineAt(dataFile, offset))
      File "/usr/local/lib/python2.7/dist-packages/pattern/en/wordnet/pywordnet/wordnet.py", line 366, in __init__
        (self._senseTuples, remainder) = _partition(tokens[4:], 2, string.atoi(tokens[3], 16))
      File "/usr/lib/python2.7/string.py", line 403, in atoi
        return _int(s, base)
    ValueError: invalid literal for int() with base 16: '@'

    this is happening in interactive use in IDLE.

    When running 06-wordnet.py from the location of the unzipped download, I get an error at a later point:

    Traceback (most recent call last):
      File "/home/christiaan/Downloads/pattern-master/examples/03-en/06-wordnet.py", line 46, in <module>
        s.append((a.similarity(b), word))
      File "../../pattern/text/en/wordnet/__init__.py", line 272, in similarity
        lin = 2.0 * log(lcs(self, synset).ic) / (log(self.ic * synset.ic) or 1)
    ValueError: math domain error

    This is raised by Synset.similarity, probably when it calculates the log of a negative number while working with the synsets for the words 'cat' and 'spaghetti'. Unfortunately for me, this is exactly the function I'm interested in. A temporary workaround is to place the pattern modules on my project's path and wrap the call in a try...except block to circumvent the ValueError, but it looks like something's broken in the wordnet implementation — although the first issue might just be a problem with my system setup or messy clips-pattern version updates.

    opened by christiaanw 4
  • sentiment does not return value between -1 and 1


    Hello,

    The docs state that sentiment returns a polarity value between -1 and 1, but this does not always appear to be the case. E.g. the code below gives a value lower than -1. Why is this?

    >>> from pattern.nl import sentiment
    >>> sentiment("ik vind het heel vervelend als dat gebeurt")
    (-1.0133333333333332, -1.0133333333333332)
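Until the lexicon scores are bounded upstream, a defensive clamp keeps downstream code within the documented range (a workaround, not a fix):

```python
def clamp(x, lo=-1.0, hi=1.0):
    """Force a score into the documented [-1, 1] range."""
    return max(lo, min(hi, x))

polarity, subjectivity = -1.0133333333333332, -1.0133333333333332
print(clamp(polarity), clamp(subjectivity))  # → -1.0 -1.0
```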

    opened by jwijffels 4
  • Vectorize inefficient python for loops with numpy


    Hi Maintainers of this repo,

    Thank you very much for your excellent work; I am new to this repository. I am a researcher studying the best practices of evolving data science code. According to our findings after examining 1000 data science repositories, replacing loop-based calculations with vectorized operations is a widespread practice among developers, since it improves performance and code quality. I created this PR to make better use of NumPy functions and avoid unnecessary loops.

    This PR is a minor contribution compared to all the hard work that you have done in this repo. However, I am hoping that it will enhance code quality and, hopefully, performance.

    opened by maldil 1
  • pip install throws error - bin/sh: 1: mysql_config: not found


    After running `pip install pattern` I get the error below:

    [email protected]:~$ pip install pattern
    Defaulting to user installation because normal site-packages is not writeable
    Collecting pattern
      Using cached Pattern-3.6.0.tar.gz (22.2 MB)
      Preparing metadata (setup.py) ... done
    Requirement already satisfied: future in /usr/lib/python3/dist-packages (from pattern) (0.18.2)
    Collecting backports.csv
      Using cached backports.csv-1.0.7-py2.py3-none-any.whl (12 kB)
    Collecting mysqlclient
      Using cached mysqlclient-2.1.0.tar.gz (87 kB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
      
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [16 lines of output]
          /bin/sh: 1: mysql_config: not found
          /bin/sh: 1: mariadb_config: not found
          /bin/sh: 1: mysql_config: not found
          Traceback (most recent call last):
            File "<string>", line 2, in <module>
            File "<pip-setuptools-caller>", line 34, in <module>
            File "/tmp/pip-install-qybufuhd/mysqlclient_ad587186f3304bbba8c6f9984564fb73/setup.py", line 15, in <module>
              metadata, options = get_config()
            File "/tmp/pip-install-qybufuhd/mysqlclient_ad587186f3304bbba8c6f9984564fb73/setup_posix.py", line 70, in get_config
              libs = mysql_config("libs")
            File "/tmp/pip-install-qybufuhd/mysqlclient_ad587186f3304bbba8c6f9984564fb73/setup_posix.py", line 31, in mysql_config
              raise OSError("{} not found".format(_mysql_config_path))
          OSError: mysql_config not found
          mysql_config --version
          mariadb_config --version
          mysql_config --libs
          [end of output]
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.
    WARNING: There was an error checking the latest version of pip.
    
    
    
    opened by rohan-paul 2
  • 'Thread' object has no attribute 'isAlive'


    Hi!

    In "/usr/local/lib/python3.9/site-packages/pattern/web/__init__.py", line 224 calls isAlive(). Running the asynchronous-requests example from pattern.web throws:

    Traceback (most recent call last):
      File "/Users/eyoshi/Python/Pattern/pattern_web_example.py", line 20, in <module>
        while not request.done:
      File "/usr/local/lib/python3.9/site-packages/pattern/web/__init__.py", line 224, in done
        return not self._thread.isAlive()
    AttributeError: 'Thread' object has no attribute 'isAlive'

    isAlive needs to be changed to is_alive() here.
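A version-agnostic check works on old and new Pythons alike (`isAlive` was deprecated in 3.8 and removed in 3.9; this sketch uses only the stdlib):

```python
import threading
import time

def thread_done(t):
    """True when the thread has finished; tolerates both spellings."""
    is_alive = getattr(t, 'is_alive', None) or getattr(t, 'isAlive')
    return not is_alive()

t = threading.Thread(target=lambda: time.sleep(0.05))
t.start()
t.join()
print(thread_done(t))  # True once the worker has finished
```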

    opened by EBoiSha 0
  • License Type issue


    Hi, this library depends on mysqlclient, which is licensed under the GPL. Under the GPL's terms, software that incorporates GPL code cannot be distributed under any other license. So basically we need to either remove the mysqlclient dependency or replace the BSD-3 license with the GPL.

    opened by rsinda 0
  • Unexpected StopIteration exception being raised.


    I downloaded the pattern module using pip. Then, when I try to run the example given in the readme file, a StopIteration is raised:

    (ProjectIM) PS C:\Users\Sourav Kannantha B\Documents\ProjectIM\go bot> py .\pattern_ex.py
    Traceback (most recent call last):
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 609, in _read
        raise StopIteration
    StopIteration

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\go bot\pattern_ex.py", line 11, in <module>
        v = tag(s)
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\en\__init__.py", line 188, in tag
        for sentence in parse(s, tokenize, True, False, False, False, encoding, **kwargs).split():
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\en\__init__.py", line 169, in parse
        return parser.parse(s, *args, **kwargs)
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 1172, in parse
        s[i] = self.find_tags(s[i], **kwargs)
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\en\__init__.py", line 114, in find_tags
        return Parser.find_tags(self, tokens, **kwargs)
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 1113, in find_tags
        lexicon = kwargs.get("lexicon", self.lexicon or {}),
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 376, in __len__
        return self.lazy("len")
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 368, in lazy
        self.load()
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 625, in load
        dict.update(self, (x.split(" ")[:2] for x in _read(self._path) if len(x.split(" ")) > 1))
      File "C:\Users\Sourav Kannantha B\Documents\ProjectIM\lib\site-packages\pattern\text\__init__.py", line 625, in <genexpr>
        dict.update(self, (x.split(" ")[:2] for x in _read(self._path) if len(x.split(" ")) > 1))
    RuntimeError: generator raised StopIteration
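The root cause is PEP 479: since Python 3.7, a StopIteration raised inside a generator body is converted to RuntimeError, which is exactly what Pattern's lazy lexicon reader trips over. A minimal reproduction, plus the conventional fix:

```python
def bad_reader(lines):
    it = iter(lines)
    while True:
        line = next(it)        # raises StopIteration at end of input...
        yield line.strip()     # ...which PEP 479 turns into RuntimeError

try:
    list(bad_reader(['a\n', 'b\n']))
except RuntimeError as e:
    print('RuntimeError:', e)  # → generator raised StopIteration

def good_reader(lines):
    for line in lines:         # let the for-loop absorb exhaustion
        yield line.strip()

print(list(good_reader(['a\n', 'b\n'])))  # → ['a', 'b']
```

The fix in Pattern is the same shape: replace `raise StopIteration` inside the `_read` generator with a plain `return`.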

    opened by SouravKB 1
Releases: 3.7-beta

Owner: Computational Linguistics Research Group, Computational Linguistics and Psycholinguistics Research Center, University of Antwerp