Python wrapper for Stanford CoreNLP.

Overview

stanfordcorenlp

stanfordcorenlp is a Python wrapper for Stanford CoreNLP. It provides a simple API for text processing tasks such as tokenization, part-of-speech tagging, named entity recognition, constituency parsing, dependency parsing, and more.

Prerequisites

Java 1.8+ (Check with command: java -version) (Download Page)

Stanford CoreNLP (Download Page)

Py Version            CoreNLP Version
v3.7.0.1, v3.7.0.2    CoreNLP 3.7.0
v3.8.0.1              CoreNLP 3.8.0
v3.9.1.1              CoreNLP 3.9.1

Installation

pip install stanfordcorenlp

Example

Simple Usage

# Simple usage
from stanfordcorenlp import StanfordCoreNLP

nlp = StanfordCoreNLP(r'G:\JavaLibraries\stanford-corenlp-full-2018-02-27')

sentence = 'Guangdong University of Foreign Studies is located in Guangzhou.'
print('Tokenize:', nlp.word_tokenize(sentence))
print('Part of Speech:', nlp.pos_tag(sentence))
print('Named Entities:', nlp.ner(sentence))
print('Constituency Parsing:', nlp.parse(sentence))
print('Dependency Parsing:', nlp.dependency_parse(sentence))

nlp.close()  # Do not forget to close! The backend server will consume a lot of memory.

Output format:

# Tokenize
[u'Guangdong', u'University', u'of', u'Foreign', u'Studies', u'is', u'located', u'in', u'Guangzhou', u'.']

# Part of Speech
[(u'Guangdong', u'NNP'), (u'University', u'NNP'), (u'of', u'IN'), (u'Foreign', u'NNP'), (u'Studies', u'NNPS'), (u'is', u'VBZ'), (u'located', u'JJ'), (u'in', u'IN'), (u'Guangzhou', u'NNP'), (u'.', u'.')]

# Named Entities
 [(u'Guangdong', u'ORGANIZATION'), (u'University', u'ORGANIZATION'), (u'of', u'ORGANIZATION'), (u'Foreign', u'ORGANIZATION'), (u'Studies', u'ORGANIZATION'), (u'is', u'O'), (u'located', u'O'), (u'in', u'O'), (u'Guangzhou', u'LOCATION'), (u'.', u'O')]

# Constituency Parsing
 (ROOT
  (S
    (NP
      (NP (NNP Guangdong) (NNP University))
      (PP (IN of)
        (NP (NNP Foreign) (NNPS Studies))))
    (VP (VBZ is)
      (ADJP (JJ located)
        (PP (IN in)
          (NP (NNP Guangzhou)))))
    (. .)))

# Dependency Parsing
[(u'ROOT', 0, 7), (u'compound', 2, 1), (u'nsubjpass', 7, 2), (u'case', 5, 3), (u'compound', 5, 4), (u'nmod', 2, 5), (u'auxpass', 7, 6), (u'case', 9, 8), (u'nmod', 7, 9), (u'punct', 7, 10)]
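
The dependency triples above are (relation, head, dependent), where head and dependent are 1-based indices into the token list and 0 stands for the artificial ROOT node. A standalone sketch (no server required) that maps the triples back to words, reusing the tokens and triples shown above:

```python
# Map CoreNLP dependency triples (relation, head_idx, dep_idx) back to words.
# Indices are 1-based into the token list; index 0 is the artificial ROOT.

tokens = ['Guangdong', 'University', 'of', 'Foreign', 'Studies',
          'is', 'located', 'in', 'Guangzhou', '.']
triples = [('ROOT', 0, 7), ('compound', 2, 1), ('nsubjpass', 7, 2),
           ('case', 5, 3), ('compound', 5, 4), ('nmod', 2, 5),
           ('auxpass', 7, 6), ('case', 9, 8), ('nmod', 7, 9),
           ('punct', 7, 10)]

def to_words(tokens, triples):
    words = ['ROOT'] + tokens  # shift so list index matches the 1-based scheme
    return [(rel, words[head], words[dep]) for rel, head, dep in triples]

for rel, head, dep in to_words(tokens, triples):
    print('%s(%s, %s)' % (rel, head, dep))  # e.g. nsubjpass(located, University)
```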

Other Human Languages Support

Note: you must download an additional model file and place it in the .../stanford-corenlp-full-2018-02-27 folder. For example, you should download the stanford-chinese-corenlp-2018-02-27-models.jar file if you want to process Chinese.

# _*_coding:utf-8_*_

# Other human languages support, e.g. Chinese
sentence = '清华大学位于北京。'

with StanfordCoreNLP(r'G:\JavaLibraries\stanford-corenlp-full-2018-02-27', lang='zh') as nlp:
    print(nlp.word_tokenize(sentence))
    print(nlp.pos_tag(sentence))
    print(nlp.ner(sentence))
    print(nlp.parse(sentence))
    print(nlp.dependency_parse(sentence))

General Stanford CoreNLP API

Since this will load all the models, which requires more memory, initialize the server with more memory; 8 GB is recommended.

# General JSON output
nlp = StanfordCoreNLP(r'path_to_corenlp', memory='8g')
print(nlp.annotate(sentence))
nlp.close()

You can specify properties:

  • annotators: tokenize, ssplit, pos, lemma, ner, parse, depparse, dcoref (See Detail)

  • pipelineLanguage: en, zh, ar, fr, de, es (English, Chinese, Arabic, French, German, Spanish) (See Annotator Support Detail)

  • outputFormat: json, xml, text

text = 'Guangdong University of Foreign Studies is located in Guangzhou. ' \
       'GDUFS is active in a full range of international cooperation and exchanges in education. '

props = {'annotators': 'tokenize,ssplit,pos', 'pipelineLanguage': 'en', 'outputFormat': 'xml'}
print(nlp.annotate(text, properties=props))
nlp.close()
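
Note that annotate() returns the raw server response as a plain string, so with a JSON outputFormat you parse it yourself, e.g. with json.loads. A minimal standalone sketch; the response literal below is a hand-trimmed illustration of the JSON shape, not actual server output (real responses carry many more fields per token):

```python
import json

# Hand-trimmed illustration of a CoreNLP JSON response (not real server output).
response = '''{"sentences": [{"index": 0, "tokens": [
    {"index": 1, "word": "GDUFS", "pos": "NNP"},
    {"index": 2, "word": "is", "pos": "VBZ"},
    {"index": 3, "word": "active", "pos": "JJ"}]}]}'''

doc = json.loads(response)
for sentence in doc['sentences']:
    # Collect (word, POS) pairs per sentence.
    print([(t['word'], t['pos']) for t in sentence['tokens']])
```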

Use an Existing Server

Start a CoreNLP Server with command:

java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000

And then:

# Use an existing server
nlp = StanfordCoreNLP('http://localhost', port=9000)

Debug

import logging
from stanfordcorenlp import StanfordCoreNLP

# Debug the wrapper
nlp = StanfordCoreNLP(r'path_or_host', logging_level=logging.DEBUG)

# Check more info from the CoreNLP Server 
nlp = StanfordCoreNLP(r'path_or_host', quiet=False, logging_level=logging.DEBUG)
nlp.close()

Build

We use setuptools to package our project. You can build from the latest source code with the following command:

$ python setup.py bdist_wheel --universal

You will find the .whl file under the dist directory.

Comments
  •  INFO:root:Waiting until the server is available.

    Hi, I'm trying to use this library for a project where I need coreference resolution. I was testing the test.py file and this message kept showing indefinitely. How do I fix this?

    Thanks in advance.

    opened by antoineChammas 13
  • Chinese coreference resolution produces no output

    When I add dcoref in the general API, no result is output. But if I remove dcoref from the pipeline, the correct output appears, and the English pipeline does not have this problem. Looking at the source code, the Chinese and English handling is almost identical apart from the model file. Does this wrapper not yet support Chinese coreference resolution?

    The code below is test code that works without problems.

    opened by chenjiaxiang 10
  • Problems with NER

    When I tried to run the demo code for NER: print 'Named Entities:', nlp.ner(sentence)

    Traceback (most recent call last):
      File "", line 1, in <module>
      File "/usr/local/lib/pypy2.7/dist-packages/stanfordcorenlp/corenlp.py", line 146, in ner
        r_dict = self._request('ner', sentence)
      File "/usr/local/lib/pypy2.7/dist-packages/stanfordcorenlp/corenlp.py", line 171, in _request
        r_dict = json.loads(r.text)
      File "/usr/lib/pypy/lib-python/2.7/json/__init__.py", line 347, in loads
        return _default_decoder.decode(s)
      File "/usr/lib/pypy/lib-python/2.7/json/decoder.py", line 363, in decode
        obj, end = self.raw_decode(s, idx=WHITESPACE.match(s, 0).end())
      File "/usr/lib/pypy/lib-python/2.7/json/decoder.py", line 381, in raw_decode
        raise ValueError("No JSON object could be decoded")
    ValueError: No JSON object could be decoded

    Does anyone know how to solve this problem? I use StanfordCoreNLP version 3.7.0.

    opened by TwinkleChow 10
  • No JSON object error in test.py

    Hi

    When I ran the test with python 2.7, I got the following error:

    Initializing native server...
    java -Xmx4g -cp "/home/ehsan/Java/JavaLibraries/stanford-corenlp-full-2016-10-31/*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
    The server is available.
    Traceback (most recent call last):
      File "test.py", line 9, in <module>
        print('Tokenize:', nlp.word_tokenize(sentence))
      File "/home/ehsan/Python/stanford-corenlp/stanfordcorenlp/corenlp.py", line 78, in word_tokenize
        r_dict = self._request('ssplit,tokenize', sentence)
      File "/home/ehsan/Python/stanford-corenlp/stanfordcorenlp/corenlp.py", line 114, in _request
        r_dict = json.loads(r.text)
      File "/home/ehsan/anaconda3/envs/py27-test-corenlp/lib/python2.7/json/__init__.py", line 339, in loads
        return _default_decoder.decode(s)
      File "/home/ehsan/anaconda3/envs/py27-test-corenlp/lib/python2.7/json/decoder.py", line 364, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/home/ehsan/anaconda3/envs/py27-test-corenlp/lib/python2.7/json/decoder.py", line 382, in raw_decode
        raise ValueError("No JSON object could be decoded")
    ValueError: No JSON object could be decoded
    
    opened by ehsanmok 10
  • JSONDecodeError: Expecting value line 1 column 1

    I was trying to do NER with my text:

    import pandas as pd
    from stanfordcorenlp import StanfordCoreNLP

    documents = pd.read_csv('some csv file')['documents'].values.tolist()
    text = documents[0]  # text is a string

    nlp = StanfordCoreNLP('my_path\stanford-corenlp-full-2018-02-27')  # latest version
    print(nlp.ner(text))

    But I keep getting this error:

    raise JSONDecodeError("Expecting value", s, err.value) from None
    json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

    opened by yceny 6
  • FileNotFoundError

    I'm getting a file not found error as shown below:

    Traceback (most recent call last):
      File "D:/Users/[user]/Documents/NLP/arabic_tagger/build_model.py", line 6, in <module>
        nlp = StanfordCoreNLP(corenlp_path, lang='ar', memory='4g')
      File "C:\Users\[user]\AppData\Roaming\Python\Python36\site-packages\stanfordcorenlp\corenlp.py", line 46, in __init__
        if not subprocess.call(['java', '-version'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) == 0:
      File "C:\Users\[user]\AppData\Local\Programs\Python\Python36-32\lib\subprocess.py", line 267, in call
        with Popen(*popenargs, **kwargs) as p:
      File "C:\Users\[user]\AppData\Local\Programs\Python\Python36-32\lib\subprocess.py", line 709, in __init__
        restore_signals, start_new_session)
      File "C:\Users\[user]\AppData\Local\Programs\Python\Python36-32\lib\subprocess.py", line 997, in _execute_child
        startupinfo)
    FileNotFoundError: [WinError 2] The system cannot find the file specified

    I have used this wrapper before and am using it in the same way as always:

    corenlp_path = 'D:/Users/[user]/Desktop/StanfordCoreNLP/Full_CoreNLP_3.8.0'
    nlp = StanfordCoreNLP(corenlp_path, lang='ar', memory='4g')

    Just to be sure, I downloaded version 3.8.0 as well as the Arabic models and made sure they are in the path specified. I'm wondering if the FileNotFoundError is not referring to the CoreNLP path but something else... subprocess.py is in the correct directory. So yeah... not sure what's wrong/what to do.

    Thanks!

    opened by mjrinker 5
  • Batch processing

    It is a great wrapper. Can you make it run as a batch process? It is too slow to run this each time for a new sentence; I need to dependency-parse several sentences within seconds. Please look into the issue.

    opened by Subh1m 5
  • How to use stanfordcorenlp to replace all pronouns in a sentence with the nouns

    How can I use stanfordcorenlp to replace all pronouns with their nouns in a sentence? For example, the sentence is: Fred Rogers lives in a house with pets. It is two stories, and he has a dog, a cat, a rabbit, three goldfish, and a monkey.

    This needs to be converted to: Fred Rogers lives in a house with pets. House is two stories, and Fred Rogers has a dog, a cat, a rabbit, three goldfish, and a monkey.

    I used the code below:

    nlp = StanfordCoreNLP(r'G:\JavaLibraries\stanford-corenlp-full-2017-06-09', quiet=False)
    props = {'annotators': 'coref', 'pipelineLanguage': 'en'}

    text = 'Barack Obama was born in Hawaii. He is the president. Obama was elected in 2008.'
    result = json.loads(nlp.annotate(text, properties=props))

    mentions = result['corefs'].items()

    However, I cannot figure out how to iterate through the mentions and perform the replacement.
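
For reference, in the JSON output result['corefs'] maps each chain id to a list of mention dicts, each carrying fields such as 'text' and 'isRepresentativeMention'. A standalone sketch of walking the chains; the corefs dict below is hand-made to mimic the server's shape (heavily trimmed), not real output:

```python
# Walk coreference chains and pair each non-representative mention with the
# chain's representative mention. The dict is a hand-made, trimmed imitation
# of CoreNLP's 'corefs' JSON output, for illustration only.
corefs = {
    '1': [
        {'text': 'Barack Obama', 'isRepresentativeMention': True,
         'sentNum': 1, 'startIndex': 1, 'endIndex': 3},
        {'text': 'He', 'isRepresentativeMention': False,
         'sentNum': 2, 'startIndex': 1, 'endIndex': 2},
    ],
}

def representative_of(chain):
    for mention in chain:
        if mention.get('isRepresentativeMention'):
            return mention['text']
    return chain[0]['text']  # fall back to the first mention

for chain in corefs.values():
    rep = representative_of(chain)
    for mention in chain:
        if not mention['isRepresentativeMention']:
            print('%s -> %s' % (mention['text'], rep))  # He -> Barack Obama
```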

    opened by manavpolikara 4
  • Connecting to an unavailable server doesn't throw an exception

    When trying to connect to an existing server that is unavailable, the code throws no exception and becomes unresponsive while trying to annotate, leading to timeout errors.

    opened by GanadiniAkshay 3
  • How to extract relations from stanford annotatedText?

    I am able to annotate text successfully by using stanford-corenlp as follows

    nlp = StanfordCoreNLP('http://localhost', port=9000)
    sentence = '''Michael James editor of Publishers Weekly,
                     Bill Gates is the owner of Microsoft,
                     Obama is the owner of Microsoft,
                     Satish lives in Hyderabad'''
    
    props={'annotators': 'tokenize, ssplit, pos, lemma, ner, regexner,coref',
           'regexner.mapping':'training.txt','pipelineLanguage':'en'}
    
    annotatedText = json.loads(nlp.annotate(sentence, properties=props))
    

    I am trying to get the relation of annotatedText, but it returns nothing

    roles = """
        (.*(                   
        analyst|
        owner|
        lives|
        editor|
        librarian).*)|
        researcher|
        spokes(wo)?man|
        writer|
        ,\sof\sthe?\s*  # "X, of (the) Y"
        """
    ROLES = re.compile(roles, re.VERBOSE)
    for rel in nltk.sem.extract_rels('PERSON', 'ORGANIZATION', annotatedText, corpus='ace', pattern=ROLES):
        print(nltk.sem.rtuple(rel))
    
    

    Can you please help with how to extract the relations from the Stanford annotated text using NLTK's nltk.sem.extract_rels?

    opened by satishkumarkt 3
  • AttributeError: 'NoneType' object has no attribute 'PROCFS_PATH'

    Exception ignored in: <bound method StanfordCoreNLP.__del__ of <stanfordcorenlp.corenlp.StanfordCoreNLP object at 0x7f5d1e4856d8>>
    Traceback (most recent call last):
      File "/usr/local/lib/python3.5/dist-packages/stanfordcorenlp/corenlp.py", line 111, in __del__
      File "/usr/lib/python3/dist-packages/psutil/__init__.py", line 349, in __init__
      File "/usr/lib/python3/dist-packages/psutil/__init__.py", line 370, in _init
      File "/usr/lib/python3/dist-packages/psutil/_pslinux.py", line 849, in __init__
      File "/usr/lib/python3/dist-packages/psutil/_pslinux.py", line 151, in get_procfs_path
    AttributeError: 'NoneType' object has no attribute 'PROCFS_PATH'

    opened by yzho0907 3
  • Unable to print word_tokenize Chinese word

    I used the example code:

    from stanfordcorenlp import StanfordCoreNLP
    
    # Other human languages support, e.g. Chinese
    sentence = '清华大学位于北京。'
    
    with StanfordCoreNLP(r'install_packages/stanford-corenlp-full-2016-10-31', lang='zh') as nlp:
        print(nlp.word_tokenize(sentence))
        print(nlp.pos_tag(sentence))
        print(nlp.ner(sentence))
        print(nlp.parse(sentence))
        print(nlp.dependency_parse(sentence))
    

    And my output is:

    ['', '', '', '', '']
    [('', 'NR'), ('', 'NN'), ('', 'VV'), ('', 'NR'), ('', 'PU')]
    [('', 'ORGANIZATION'), ('', 'ORGANIZATION'), ('', 'O'), ('', 'GPE'), ('', 'O')]
    (ROOT
      (IP
        (NP (NR 清华) (NN 大学))
        (VP (VV 位于)
          (NP (NR 北京)))
        (PU 。)))
    [('ROOT', 0, 3), ('compound:nn', 2, 1), ('nsubj', 3, 2), ('dobj', 3, 4), ('punct', 3, 5)]
    

    It seems that the parse call prints the correct result, but the other calls can't. I don't know why this happens.

    opened by PolarisRisingWar 0
  • 'json.decoder.JSONDecodeError' Error

    With the latest version of the Chinese language package, a 'json.decoder.JSONDecodeError' occurs, caused by an HTTP request error [500]. The function _request should be updated as follows (change the annotators configuration):

    def _request(self, annotators=None, data=None, *args, **kwargs):
        if sys.version_info.major >= 3:
            data = data.encode('utf-8')
        if annotators:
            # NEW language package (version > 3.9.1.1): the API cmd must start
            # with these two annotators: tokenize,ssplit
            annotators = 'tokenize,ssplit' + ',' + annotators
        properties = {'annotators': annotators, 'outputFormat': 'json'}
        params = {'properties': str(properties), 'pipelineLanguage': self.lang}
        if 'pattern' in kwargs:
            params = {"pattern": kwargs['pattern'], 'properties': str(properties), 'pipelineLanguage': self.lang}

        logging.info(params)
        r = requests.post(self.url, params=params, data=data, headers={'Connection': 'close'})
        print(self.url, r)
        r_dict = json.loads(r.text)

        return r_dict
    
    opened by mvllwong 0
  • how to extract data annotator

    How can I handle this error? After several trials, I noticed that I have to load the JSON using this function, from another issue:

    import nltk, json, pycorenlp, stanfordcorenlp
    from clause import clauseSentence
    
    url_stanford = 'http://corenlp.run'
    model_stanford = "stanford-corenlp-4.0.0"
    try:
      nlp = pycorenlp.StanfordCoreNLP(url_stanford) # pycorenlp
    except:
      nlp = stanfordcorenlp.StanfordCoreNLP(model_stanford) # stanford nlp
    
    props={"annotators":"parse","outputFormat": "json"}
    sent = 'The product shall plot the data points in a scientifically correct manner'
    dataform = nlp.annotate(sent, properties=props)
    try:
      dt = clauseSentence().print_clauses(dataform['sentences'][0]['parse'])
    except:
      parser = json.loads(dataform)
      dt = clauseSentence().print_clauses(parser['sentences'][0]['parse'])
    print(dt)
    

    But there is still a problem; the result is: JSONDecodeError: Expecting value: line 1 column 1 (char 0)

    opened by asyrofist 0
  • error with "%"

    When I use this tool to analyse a text which includes "%", it raises an error (screenshot omitted).

    I fixed it by changing the POST code in corenlp.py (screenshot omitted).

    opened by liuxin99 0
  • How to use 4class NER classifier instead of default classifier

    Hi, I am trying to use the 4-class NER classifier but am not sure how to do this. I searched online and found a post, but was unable to make it work with this Python package. Any help will be greatly appreciated. I tried to follow the link below without success. https://stackoverflow.com/questions/31711995/how-to-load-a-specific-classifier-in-stanfordcorenlp

    opened by KRSTD 0
Releases: v3.9.1.1