A Japanese tokenizer based on recurrent neural networks

Overview



Nagisa is a Python module for Japanese word segmentation and POS-tagging. It is designed to be a simple and easy-to-use tool.

This tool has the following features.

  • Based on recurrent neural networks.
  • The word segmentation model uses character- and word-level features [Ikeda+].
  • The POS-tagging model uses tag-dictionary information [Inoue+].

For more details refer to the following links.

  • The slides from PyCon JP 2019 are available here.
  • The article in Japanese is available here.
  • The documentation is available here.

Installation

Python 2.7.x or 3.5+ is required. This tool uses DyNet (the Dynamic Neural Network Toolkit) to compute neural networks. You can install nagisa with the following command.

pip install nagisa

For Windows users, please run it with Python 3.6 or 3.7 (64-bit).

Basic usage

Sample of word segmentation and POS-tagging for Japanese.

import nagisa

text = 'Pythonで簡単に使えるツールです'
words = nagisa.tagging(text)
print(words)
#=> Python/名詞 で/助詞 簡単/形状詞 に/助動詞 使える/動詞 ツール/名詞 です/助動詞

# Get a list of words
print(words.words)
#=> ['Python', 'で', '簡単', 'に', '使える', 'ツール', 'です']

# Get a list of POS-tags
print(words.postags)
#=> ['名詞', '助詞', '形状詞', '助動詞', '動詞', '名詞', '助動詞']
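Since the two lists line up index by index, they can be combined into (word, tag) pairs with plain Python. A minimal sketch using the example output above (no nagisa call needed here):

```python
# Pair each word with its POS tag; words and postags line up one-to-one.
words = ['Python', 'で', '簡単', 'に', '使える', 'ツール', 'です']
postags = ['名詞', '助詞', '形状詞', '助動詞', '動詞', '名詞', '助動詞']

pairs = list(zip(words, postags))
print(pairs[0])
#=> ('Python', '名詞')
```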

Post-processing functions

Filter and extract words by specific POS tags.

# Filter out the words with the specified POS tags.
words = nagisa.filter(text, filter_postags=['助詞', '助動詞'])
print(words)
#=> Python/名詞 簡単/形状詞 使える/動詞 ツール/名詞

# Extract only nouns.
words = nagisa.extract(text, extract_postags=['名詞'])
print(words)
#=> Python/名詞 ツール/名詞

# This is a list of available POS-tags in nagisa.
print(nagisa.tagger.postags)
#=> ['補助記号', '名詞', ... , 'URL']
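The filter/extract behavior can be mimicked in plain Python. A minimal sketch of the semantics over (word, tag) pairs (an illustration, not nagisa's implementation):

```python
def filter_tags(pairs, filter_postags):
    """Drop pairs whose tag is in filter_postags (like nagisa.filter)."""
    return [(w, t) for w, t in pairs if t not in filter_postags]

def extract_tags(pairs, extract_postags):
    """Keep only pairs whose tag is in extract_postags (like nagisa.extract)."""
    return [(w, t) for w, t in pairs if t in extract_postags]

pairs = [('Python', '名詞'), ('で', '助詞'), ('ツール', '名詞'), ('です', '助動詞')]
print(extract_tags(pairs, ['名詞']))
#=> [('Python', '名詞'), ('ツール', '名詞')]
```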

Add a user dictionary in an easy way.

# default
text = "3月に見た「3月のライオン」"
print(nagisa.tagging(text))
#=> 3/名詞 月/名詞 に/助詞 見/動詞 た/助動詞 「/補助記号 3/名詞 月/名詞 の/助詞 ライオン/名詞 」/補助記号

# If a word ("3月のライオン") is included in the single_word_list, it is recognized as a single word.
new_tagger = nagisa.Tagger(single_word_list=['3月のライオン'])
print(new_tagger.tagging(text))
#=> 3/名詞 月/名詞 に/助詞 見/動詞 た/助動詞 「/補助記号 3月のライオン/名詞 」/補助記号
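The release notes mention that the longest match is used to extract words from single_word_list. The idea can be sketched as a greedy longest-match scan (a simplified illustration, not nagisa's actual implementation):

```python
def merge_user_words(text, single_word_list):
    """Greedy longest-match scan: emit dictionary entries as single chunks,
    leaving the remaining spans for the ordinary tokenizer."""
    entries = sorted(single_word_list, key=len, reverse=True)  # longest first
    chunks, i, buf = [], 0, ''
    while i < len(text):
        for entry in entries:
            if text.startswith(entry, i):
                if buf:
                    chunks.append(buf)
                    buf = ''
                chunks.append(entry)
                i += len(entry)
                break
        else:
            buf += text[i]
            i += 1
    if buf:
        chunks.append(buf)
    return chunks

print(merge_user_words("3月に見た「3月のライオン」", ["3月のライオン"]))
#=> ['3月に見た「', '3月のライオン', '」']
```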

Train a model

Nagisa (v0.2.0+) provides a simple training method for a joint word segmentation and sequence labeling (e.g., POS-tagging, NER) model.

The format of the train/dev/test files is TSV. Each line contains a word and its tag separated by a tab (word \t tag). Note that you must put EOS between sentences. Refer to the sample datasets and the tutorial (Train a model for Universal Dependencies).

$ cat sample.train
唯一	NOUN
の	ADP
趣味	NOUN
は	ADP
料理	NOUN
EOS
とても	ADV
おいしかっ	ADJ
た	AUX
です	AUX
。	PUNCT
EOS
ドル	NOUN
は	ADP
主要	ADJ
通貨	NOUN
EOS
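The file format above can be parsed with a few lines of plain Python. A minimal sketch of a reader (an illustration of the format, not nagisa's own loader): each sentence is a list of (word, tag) pairs, and EOS closes a sentence.

```python
def read_corpus(lines):
    """Parse 'word<TAB>tag' lines into sentences; 'EOS' separates sentences."""
    sentences, current = [], []
    for line in lines:
        line = line.rstrip('\n')
        if line == 'EOS':
            if current:
                sentences.append(current)
                current = []
        elif line:
            word, tag = line.split('\t')
            current.append((word, tag))
    if current:  # keep a trailing sentence without a final EOS
        sentences.append(current)
    return sentences

sample = ["唯一\tNOUN", "の\tADP", "EOS", "とても\tADV", "EOS"]
print(len(read_corpus(sample)))
#=> 2
```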
# After training finishes, the three model files (*.vocabs, *.params, *.hp) are saved.
nagisa.fit(train_file="sample.train", dev_file="sample.dev", test_file="sample.test", model_name="sample")

# Build the tagger by loading the trained model files.
sample_tagger = nagisa.Tagger(vocabs='sample.vocabs', params='sample.params', hp='sample.hp')

text = "福岡・博多の観光情報"
words = sample_tagger.tagging(text)
print(words)
#=> 福岡/PROPN ・/SYM 博多/PROPN の/ADP 観光/NOUN 情報/NOUN

Comments
  • Heroku deployment of NLP model Nagisa Tokenizer showing error


    Hi, I deployed my Flask app (an NLP model) on Heroku. It is basically a price prediction model where some columns are in Japanese, to which I applied NLP with the nagisa library for tokenization, and some columns are numerical data. I pickled the vectorizers and the model, and finally added them to my Flask API. But after deployment, when I enter the values in the frontend and click the Predict button, the result is not displayed. This is the exact error I am facing. The exact code of the tokenizer is:

    def tokenize_jp(doc):
        doc = nagisa.tagging(doc)
        return doc.words

    I am not able to figure out how to fix this. Does nagisa work on Heroku? PS: I am not really sure if the problem is with Heroku or nagisa, please help me with this.

    opened by Pranjal-bisht 22
  • AttributeError: module 'utils' has no attribute 'OOV'


    Hi, I got error in 'import nagisa' as below

    OOV = utils.OOV AttributeError: module 'utils' has no attribute 'OOV'

    I did pip install nagisa in conda environments with Python 3.7 and 3.6. I ran it on my Mac.

    opened by RonenHong 15
  • Pip/pip3 install nagisa Error


    Hello @taishi-i, when I try to pip install nagisa I get the error below. I also tried to install through conda.

    Windows7 C:\Users\SAIKIRAN>python --version Python 3.8.3

    Error: ERROR: Command errored out with exit status 1: 'c:\users\saikiran\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0 = '"'"'C:\Users\SAIKIRAN\AppData\Local\Temp\pip-install-a31d0hp1\DyNet\setup.py'"'"'; file='"'"'C:\Users\SAIKIRAN\AppData\Local\Temp\pip-install-a31d0 1\DyNet\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, ' "'exec'"'"'))' install --record 'C:\Users\SAIKIRAN\AppData\Local\Temp\pip-record-mg2btvbb\install-record.txt' --single-version-externally-managed --compile Check the lo for full command output.

    opened by ssaikiran123 14
  • Wheel request for Python 3.8


    Hello, thank you for maintaining the awesome toolkit!

    I think we cannot install nagisa by pip install nagisa on Python>=3.8. This is because:

    • (a) dynet uses the old URL for eigen (https://github.com/clab/dynet/issues/1616). Some commits (e.g. https://github.com/clab/dynet/commit/b800ed0f4c48f234bceaf9fa3d61974cef3e0029) were pushed for this problem, but no release including them is available.
    • (b) nagisa doesn't provide wheel for the latest versions of Python. If someone wants to install nagisa on Python<=3.7, it works well as wheels are uploaded to https://pypi.org/project/nagisa/#files. However, for Python>=3.8, pip will install nagisa from the source. This may not work well because of the problem (a).

    The full output of pip install nagisa on Python3.8: https://gist.github.com/himkt/1bc75b83f1735535c4df0b952f352bf6

    opened by himkt 10
  • Improving the handling of numerals of nagisa's word tokenizer


    I'm using nagisa v0.1.1. There are some problems with the tokenizer's handling of numerals: numbers and decimals are split into single characters and tagged as 名詞.

    357 -> 3_名詞 5_名詞 7_名詞 # Numbers
    1.48 -> 1_名詞 ._名詞 4_名詞 8_名詞 # Decimals
    $5.5 -> $_補助記号 5_名詞 ._補助記号 5_名詞 # Numbers with currency symbols (and other symbols)
    133-1111-2222 -> 1_名詞 3_名詞 3_名詞 -_補助記号 1_名詞 1_名詞 1_名詞 1_名詞 -_補助記号 2_名詞 2_名詞 2_名詞 2_名詞 # Phone numbers

    And so on. Is it possible to improve this?

    opened by BLKSerene 4
  • request: comparison to other tokenizers/PoS taggers


    Could you include some notes briefly comparing this to other parsers like MeCab? MeCab includes a comparison to other tokenizers/parsers. I think users would greatly benefit from knowing things like parsing speed, accuracy, and other differences/nuances/use cases.

    opened by SpongebobSquamirez 4
  • error: command 'cl.exe' failed: No such file or directory


    When I use pip install nagisa to install, the error message is:

    Collecting nagisa Using cached https://files.pythonhosted.org/packages/a1/40/a94f7944ee5d6a4d44eadcc966fe0d46b5155fb139d7b4d708e439617df1/nagisa-0.1.1.tar.gz Requirement already satisfied: six in e:\anaconda3\lib\site-packages (from nagisa) (1.11.0) Requirement already satisfied: numpy in e:\anaconda3\lib\site-packages (from nagisa) (1.14.0) Requirement already satisfied: DyNet in e:\anaconda3\lib\site-packages (from nagisa) (2.1) Requirement already satisfied: cython in e:\anaconda3\lib\site-packages (from DyNet->nagisa) (0.27.3) Building wheels for collected packages: nagisa Running setup.py bdist_wheel for nagisa ... error Complete output from command e:\anaconda3\python.exe -u -c "import setuptools, tokenize;file='C:\Users\test\AppData\Local\Temp\pip-install-t_9_vdzk\nagisa\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" bdist_wheel -d C:\Users\test\AppData\Local\Temp\pip-wheel-dmgx_3eh --python-tag cp36: running bdist_wheel Warning: Extension name 'utils' does not match fully qualified name 'nagisa.utils' of 'nagisa/utils.pyx' running build running build_py creating build creating build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\nagisa copying nagisa\mecab_system_eval.py -> build\lib.win-amd64-3.6\nagisa copying nagisa\model.py -> build\lib.win-amd64-3.6\nagisa copying nagisa\prepro.py -> build\lib.win-amd64-3.6\nagisa copying nagisa\tagger.py -> build\lib.win-amd64-3.6\nagisa copying nagisa\train.py -> build\lib.win-amd64-3.6\nagisa copying nagisa_init_.py -> build\lib.win-amd64-3.6\nagisa running egg_info writing nagisa.egg-info\PKG-INFO writing dependency_links to nagisa.egg-info\dependency_links.txt writing requirements to nagisa.egg-info\requires.txt writing top-level names to nagisa.egg-info\top_level.txt reading manifest file 'nagisa.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'nagisa.egg-info\SOURCES.txt' copying 
nagisa\utils.c -> build\lib.win-amd64-3.6\nagisa copying nagisa\utils.pyx -> build\lib.win-amd64-3.6\nagisa creating build\lib.win-amd64-3.6\nagisa\data copying nagisa\data\models.jpg -> build\lib.win-amd64-3.6\nagisa\data copying nagisa\data\nagisa_image.jpg -> build\lib.win-amd64-3.6\nagisa\data copying nagisa\data\nagisa_v001.dict -> build\lib.win-amd64-3.6\nagisa\data copying nagisa\data\nagisa_v001.hp -> build\lib.win-amd64-3.6\nagisa\data copying nagisa\data\nagisa_v001.model -> build\lib.win-amd64-3.6\nagisa\data running build_ext building 'utils' extension creating build\temp.win-amd64-3.6 creating build\temp.win-amd64-3.6\Release creating build\temp.win-amd64-3.6\Release\nagisa cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ie:\anaconda3\lib\site-packages\numpy\core\include -Ie:\anaconda3\include -Ie:\anaconda3\include /Tcnagisa/utils.c /Fobuild\temp.win-amd64-3.6\Release\nagisa/utils.obj error: command 'cl.exe' failed: No such file or directory


    Failed building wheel for nagisa Running setup.py clean for nagisa Failed to build nagisa Installing collected packages: nagisa Running setup.py install for nagisa ... error Complete output from command e:\anaconda3\python.exe -u -c "import setuptools, tokenize;file='C:\Users\test\AppData\Local\Temp\pip-install-t_9_vdzk\nagisa\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record C:\Users\test\AppData\Local\Temp\pip-record-p2d6rr5x\install-record.txt --single-version-externally-managed --compile: running install Warning: Extension name 'utils' does not match fully qualified name 'nagisa.utils' of 'nagisa/utils.pyx' running build running build_py creating build creating build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\nagisa copying nagisa\mecab_system_eval.py -> build\lib.win-amd64-3.6\nagisa copying nagisa\model.py -> build\lib.win-amd64-3.6\nagisa copying nagisa\prepro.py -> build\lib.win-amd64-3.6\nagisa copying nagisa\tagger.py -> build\lib.win-amd64-3.6\nagisa copying nagisa\train.py -> build\lib.win-amd64-3.6\nagisa copying nagisa_init_.py -> build\lib.win-amd64-3.6\nagisa running egg_info writing nagisa.egg-info\PKG-INFO writing dependency_links to nagisa.egg-info\dependency_links.txt writing requirements to nagisa.egg-info\requires.txt writing top-level names to nagisa.egg-info\top_level.txt reading manifest file 'nagisa.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'nagisa.egg-info\SOURCES.txt' copying nagisa\utils.c -> build\lib.win-amd64-3.6\nagisa copying nagisa\utils.pyx -> build\lib.win-amd64-3.6\nagisa creating build\lib.win-amd64-3.6\nagisa\data copying nagisa\data\models.jpg -> build\lib.win-amd64-3.6\nagisa\data copying nagisa\data\nagisa_image.jpg -> build\lib.win-amd64-3.6\nagisa\data copying nagisa\data\nagisa_v001.dict -> build\lib.win-amd64-3.6\nagisa\data copying nagisa\data\nagisa_v001.hp -> 
build\lib.win-amd64-3.6\nagisa\data copying nagisa\data\nagisa_v001.model -> build\lib.win-amd64-3.6\nagisa\data running build_ext building 'utils' extension creating build\temp.win-amd64-3.6 creating build\temp.win-amd64-3.6\Release creating build\temp.win-amd64-3.6\Release\nagisa cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ie:\anaconda3\lib\site-packages\numpy\core\include -Ie:\anaconda3\include -Ie:\anaconda3\include /Tcnagisa/utils.c /Fobuild\temp.win-amd64-3.6\Release\nagisa/utils.obj error: command 'cl.exe' failed: No such file or directory

    ----------------------------------------
    

    Command "e:\anaconda3\python.exe -u -c "import setuptools, tokenize;file='C:\Users\test\AppData\Local\Temp\pip-install-t_9_vdzk\nagisa\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record C:\Users\test\AppData\Local\Temp\pip-record-p2d6rr5x\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\test\AppData\Local\Temp\pip-install-t_9_vdzk\nagisa\

    How to fix it?

    opened by dapsjj 4
  • Drop support for Python2.7?


    The EOL of Python 2.7 is January 1, 2020. As with many other major open-source projects, is there any plan for a new version of nagisa that will drop support for Python 2.7 and support only Python 3?

    A Python3-only version could remove the dependency on six and lighten the maintenance burden in the future.

    opened by BLKSerene 3
  • Returning a generator instead of a list in nagisa.postagging


    Hi, I'm trying to figure out how to POS-tag a list of tokens that have already been tokenized, and I found #8, which works fine.

    I think that returning a generator instead of a list would be better for users, since the current behavior creates a long list of POS tags in memory for a large input text. In most cases, the returned POS tags are iterated over (usually only once) to be zipped with the tokens.

    Or you could provide two functions, like postagging and lpostagging, the former returning a generator and the latter returning a plain list.

    opened by BLKSerene 3
  • Illegal instruction (core dumped)


    Thanks for building this. I've been trying MeCab and not getting the exact results I need, so I thought I'd give this a try.

    For now, I have this working on a CentOS box, but I want to get it working on Ubuntu, as it's my main dev machine.

    I keep getting:

    [dynet] random seed: 1234
    [dynet] allocating memory: 32MB
    Illegal instruction (core dumped)
    

    Distributor ID: Ubuntu
    Description: Ubuntu 20.04 LTS
    Release: 20.04
    Codename: focal

    • Python 3.8.5
    • 8GB laptop.

    Is there any more information you need? Thanks

    opened by paulm17 2
  • Why do you have 6 dim outputs for word segmentation?


    From https://github.com/taishi-i/nagisa/blob/master/nagisa/model.py#L59: why do you have 6-dim outputs for word segmentation? encode_ws has 6-dim outputs. I understand you are using BMES (the first 4 dims). What are the last two used for? Could you explain, please?

    Thank you.

    opened by wannaphong 2
  • building nagisa on m1


    I am facing this issue:

    [notice] To update, run: pip install --upgrade pip
    (venv) [email protected] vocab % pip install nagisa
    Collecting nagisa
      Using cached nagisa-0.2.8.tar.gz (20.9 MB)
      Preparing metadata (setup.py) ... done
    Collecting six
      Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
    Collecting numpy
      Using cached numpy-1.23.4-cp310-cp310-macosx_11_0_arm64.whl (13.3 MB)
    Collecting nagisa
      Using cached nagisa-0.2.7.tar.gz (20.9 MB)
      Preparing metadata (setup.py) ... done
    Collecting DyNet
      Using cached dyNET-2.1.2.tar.gz (509 kB)
      Installing build dependencies ... done
      Getting requirements to build wheel ... done
      Preparing metadata (pyproject.toml) ... done
    Collecting cython
      Using cached Cython-0.29.32-py2.py3-none-any.whl (986 kB)
    Building wheels for collected packages: nagisa, DyNet
      Building wheel for nagisa (setup.py) ... done
      Created wheel for nagisa: filename=nagisa-0.2.7-cp310-cp310-macosx_11_0_arm64.whl size=21306402 sha256=c559ab30293dffc0d1ae36d215725dec08da0910ed1c3331728c398397258d2f
      Stored in directory: /Users/b/Library/Caches/pip/wheels/cf/38/0b/463d99fdf6d3c736cfcb4124124496513831eeefdc7f896391
      Building wheel for DyNet (pyproject.toml) ... error
      error: subprocess-exited-with-error
    
      × Building wheel for DyNet (pyproject.toml) did not run successfully.
      │ exit code: 1
      ╰─> [101 lines of output]
          /private/var/folders/yv/lystpk8n2015cf8vmqd2yj_c0000gp/T/pip-build-env-rvxcggqa/overlay/lib/python3.10/site-packages/setuptools/dist.py:530: UserWarning: Normalizing 'v2.1.2' to '2.1.2'
            warnings.warn(tmpl.format(**locals()))
          /private/var/folders/yv/lystpk8n2015cf8vmqd2yj_c0000gp/T/pip-build-env-rvxcggqa/overlay/lib/python3.10/site-packages/setuptools/dist.py:771: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
            warnings.warn(
          running bdist_wheel
          running build
          INFO:root:CMAKE_PATH='/opt/homebrew/bin/cmake'
          INFO:root:MAKE_PATH='/usr/bin/make'
          INFO:root:MAKE_FLAGS='-j 8'
          INFO:root:EIGEN3_INCLUDE_DIR='/private/var/folders/yv/lystpk8n2015cf8vmqd2yj_c0000gp/T/pip-install-v2h7cwoe/dynet_f6727a54d6ce4c5d83d9578e2d0a272a/build/py3.10-64bit/eigen'
          INFO:root:EIGEN3_DOWNLOAD_URL='https://github.com/clab/dynet/releases/download/2.1/eigen-b2e267dc99d4.zip'
          INFO:root:CC_PATH='/usr/bin/gcc'
          INFO:root:CXX_PATH='/usr/bin/g++'
          INFO:root:SCRIPT_DIR='/private/var/folders/yv/lystpk8n2015cf8vmqd2yj_c0000gp/T/pip-install-v2h7cwoe/dynet_f6727a54d6ce4c5d83d9578e2d0a272a'
          INFO:root:BUILD_DIR='/private/var/folders/yv/lystpk8n2015cf8vmqd2yj_c0000gp/T/pip-install-v2h7cwoe/dynet_f6727a54d6ce4c5d83d9578e2d0a272a/build/py3.10-64bit'
          INFO:root:INSTALL_PREFIX='/Users/b/study/jap/vocab/venv/lib/python3.10/site-packages/../../..'
          INFO:root:PYTHON='/Users/b/study/jap/vocab/venv/bin/python3.10'
          cmake version 3.24.1
    
          CMake suite maintained and supported by Kitware (kitware.com/cmake).
          Apple clang version 13.1.6 (clang-1316.0.21.2.5)
          Target: arm64-apple-darwin21.6.0
          Thread model: posix
          InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
          INFO:root:Creating build directory /private/var/folders/yv/lystpk8n2015cf8vmqd2yj_c0000gp/T/pip-install-v2h7cwoe/dynet_f6727a54d6ce4c5d83d9578e2d0a272a/build/py3.10-64bit
          INFO:root:Fetching Eigen...
          INFO:root:Unpacking Eigen...
          INFO:root:Configuring...
          -- The C compiler identification is AppleClang 13.1.6.13160021
          -- The CXX compiler identification is AppleClang 13.1.6.13160021
          -- Detecting C compiler ABI info
          -- Detecting C compiler ABI info - done
          -- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/usr/bin/gcc - skipped
          -- Detecting C compile features
          -- Detecting C compile features - done
          -- Detecting CXX compiler ABI info
          -- Detecting CXX compiler ABI info - done
          -- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/usr/bin/g++ - skipped
          -- Detecting CXX compile features
          -- Detecting CXX compile features - done
          CMake Deprecation Warning at CMakeLists.txt:2 (cmake_minimum_required):
            Compatibility with CMake < 2.8.12 will be removed from a future version of
            CMake.
    
            Update the VERSION argument <min> value or use a ...<max> suffix to tell
            CMake that the project does not need compatibility with older versions.
    
    
          -- Optimization level: fast
          -- BACKEND not specified, defaulting to eigen.
          -- Eigen dir is /private/var/folders/yv/lystpk8n2015cf8vmqd2yj_c0000gp/T/pip-install-v2h7cwoe/dynet_f6727a54d6ce4c5d83d9578e2d0a272a/build/py3.10-64bit/eigen
          -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
          -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
          -- Found Threads: TRUE
          -- Found Cython version 0.29.32
    
          CMAKE_INSTALL_PREFIX="/Users/b/study/jap/vocab/venv"
          PROJECT_SOURCE_DIR="/private/var/folders/yv/lystpk8n2015cf8vmqd2yj_c0000gp/T/pip-install-v2h7cwoe/dynet_f6727a54d6ce4c5d83d9578e2d0a272a"
          PROJECT_BINARY_DIR="/private/var/folders/yv/lystpk8n2015cf8vmqd2yj_c0000gp/T/pip-install-v2h7cwoe/dynet_f6727a54d6ce4c5d83d9578e2d0a272a/build/py3.10-64bit"
          LIBS=""
          EIGEN3_INCLUDE_DIR="/private/var/folders/yv/lystpk8n2015cf8vmqd2yj_c0000gp/T/pip-install-v2h7cwoe/dynet_f6727a54d6ce4c5d83d9578e2d0a272a/build/py3.10-64bit/eigen"
          MKL_LINK_DIRS=""
          WITH_CUDA_BACKEND=""
          CUDA_RT_FILES=""
          CUDA_RT_DIRS=""
          CUDA_CUBLAS_FILES=""
          CUDA_CUBLAS_DIRS=""
          MSVC=""
          fatal: not a git repository (or any of the parent directories): .git
          -- Configuring done
          -- Generating done
          -- Build files have been written to: /private/var/folders/yv/lystpk8n2015cf8vmqd2yj_c0000gp/T/pip-install-v2h7cwoe/dynet_f6727a54d6ce4c5d83d9578e2d0a272a/build/py3.10-64bit
          INFO:root:Compiling...
          [  4%] Building CXX object dynet/CMakeFiles/dynet.dir/deep-lstm.cc.o
          [  4%] Building CXX object dynet/CMakeFiles/dynet.dir/exec.cc.o
          [  4%] Building CXX object dynet/CMakeFiles/dynet.dir/aligned-mem-pool.cc.o
          [  5%] Building CXX object dynet/CMakeFiles/dynet.dir/cfsm-builder.cc.o
          [  8%] Building CXX object dynet/CMakeFiles/dynet.dir/dynet.cc.o
          [  8%] Building CXX object dynet/CMakeFiles/dynet.dir/dict.cc.o
          [ 10%] Building CXX object dynet/CMakeFiles/dynet.dir/devices.cc.o
          [ 11%] Building CXX object dynet/CMakeFiles/dynet.dir/dim.cc.o
          clang: error: the clang compiler does not support '-march=native'
          clang: error: the clang compiler does not support '-march=native'
          clang: error: the clang compiler does not support '-march=native'
          clang: error: the clang compiler does not support '-march=native'
          clang: error: the clang compiler does not support '-march=native'
          clang: error: the clang compiler does not support '-march=native'
          make[2]: *** [dynet/CMakeFiles/dynet.dir/devices.cc.o] Error 1
          make[2]: *** Waiting for unfinished jobs....
          make[2]: *** [dynet/CMakeFiles/dynet.dir/aligned-mem-pool.cc.o] Error 1
          make[2]: *** [dynet/CMakeFiles/dynet.dir/dynet.cc.o] Error 1
          make[2]: *** [dynet/CMakeFiles/dynet.dir/cfsm-builder.cc.o] Error 1
          clang: error: the clang compiler does not support '-march=native'
          clang: error: the clang compiler does not support '-march=native'
          make[2]: *** [dynet/CMakeFiles/dynet.dir/dim.cc.o] Error 1
          make[2]: *** [dynet/CMakeFiles/dynet.dir/deep-lstm.cc.o] Error 1
          make[2]: *** [dynet/CMakeFiles/dynet.dir/dict.cc.o] Error 1
          make[2]: *** [dynet/CMakeFiles/dynet.dir/exec.cc.o] Error 1
          make[1]: *** [dynet/CMakeFiles/dynet.dir/all] Error 2
          make: *** [all] Error 2
          error: /usr/bin/make -j 8
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      ERROR: Failed building wheel for DyNet
    Successfully built nagisa
    

    any ideas?

    opened by dataf3l 1
  • core dumped


    I am running Manjaro Linux on a ThinkPad X230, using Python 3.9.7 and the version of nagisa from pip. When I run import nagisa I get Illegal instruction (core dumped).

    opened by ryanswilson59 4
  • add cache layer to Tagger


    If the Tagger is instantiated at function level, it loads the dictionary every time; if instantiated at module level, it loads the dictionary even when it may not actually be used. Refer to https://github.com/fxsjy/jieba/blob/master/jieba/init.py

    opened by bung87 4
Releases(0.2.8)
  • 0.2.8(Sep 9, 2022)

    nagisa 0.2.8 incorporates the following changes:

    When tokenizing a text containing 'İ', an AttributeError occurred. As the following example shows, lowercasing 'İ' changes its length to 2, so features were not extracted correctly.

    >>> text = "İ" # [U+0130]
    >>> print(len(text))
    1
    >>> text = text.lower() # [U+0069] [U+0307]
    >>> print(text)
    'i̇'
    >>> print(len(text))
    2
    

    To avoid this error, the following preprocessing was added to the source code (modification 1, modification 2).

    text = text.replace('İ', 'I')
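The effect of this preprocessing can be checked directly in Python; a small demonstration of the behavior described above:

```python
# Lowercasing 'İ' (U+0130) alone changes the string length, which broke
# feature extraction; replacing it with 'I' first keeps lengths stable.
text = "İstanbul"
assert len(text.lower()) == len(text) + 1  # 'İ'.lower() -> 'i' + U+0307

fixed = text.replace('İ', 'I').lower()
print(fixed, len(fixed))
#=> istanbul 8
```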
    
    • Add Python wheels (3.6, 3.7, 3.8, 3.9, 3.10, 3.11) to PyPI for Linux
    • Add Python wheels (3.6, 3.7, 3.8, 3.9, 3.10) to PyPI for macOS
    • Add Python wheels (3.6, 3.7, 3.8) to PyPI for Windows
    Source code(tar.gz)
    Source code(zip)
    nagisa-0.2.8.tar.gz(19.93 MB)
  • 0.2.7(Jul 6, 2020)

    nagisa 0.2.7 incorporates the following changes:

    • Fix AttributeError: module 'utils' to rename utils.pyx into nagisa_utils.pyx #14
    • Add wheels to PyPI for Linux and Windows users
    • Increase test coverage from 92% to 96%
    • Fix the problem where min_count (threshold=hp['THRESHOLD']) parameter was not used in train.py
    Source code(tar.gz)
    Source code(zip)
  • 0.2.6(Jun 11, 2020)

    nagisa 0.2.6 incorporates the following changes:

    • Increase test coverage from 88% to 92%
    • Fix readFile(filename) in mecab_system_eval.py for windows users
    • Add python3.7 to .travis.yml
    • Add a DOI with the data archiving tool Zenodo to README.md
    • Add nagisa-0.2.6-cp36-cp36m-win_amd64.whl and nagisa-0.2.6-cp37-cp37m-win_amd64.whl to PyPI to install nagisa without Build Tools for Windows users #23
    • Add nagisa-0.2.6-*-manylinux1_i686.whl and nagisa-0.2.6-*-manylinux1_x86_64.whl to PyPI to install nagisa for Linux users
    Source code(tar.gz)
    Source code(zip)
  • 0.2.5(Dec 31, 2019)

    nagisa 0.2.5 incorporates the following changes:

    • Fix a whitespace bug in nagisa.decode. This fix resolves an error that occurred when decoding (nagisa.decode) words that contain whitespace.
    • Add __version__ to __init__.py
    • Add slides link at PyCon JP 2019 to README.md
    Source code(tar.gz)
    Source code(zip)
  • 0.2.4(Aug 5, 2019)

    nagisa 0.2.4 incorporates the following changes:

    • Add the new tutorial to the document (train a model for Japanese NER).
    • Add load_file function to nagisa.utils.
    • Fix 'single_word_list' compiler in nagisa.Tagger and support word segmentation using a regular expression.
    Source code(tar.gz)
    Source code(zip)
  • 0.2.3(May 19, 2019)

    nagisa 0.2.3 incorporates the following changes:

    • Fix #11. By separating tagging into word segmentation and POS tagging in tagger.py, nagisa.tagging reduces wasted memory and improves word segmentation speed.
    • Fix typo in README.md
    Source code(tar.gz)
    Source code(zip)
  • 0.2.2(May 3, 2019)

    nagisa 0.2.2 incorporates the following changes:

    • Update the documentation (e.g., add "train a model for Japanese Universal Dependencies").
    • Fix the log output of the nagisa.fit function.
    • Fix issues reported by Codacy (e.g., delete unused code in train.py).
    • Add appveyor.yml for Windows users.
    Source code(tar.gz)
    Source code(zip)
  • 0.2.0(Jan 15, 2019)

    nagisa 0.2.0 incorporates the following changes:

    • Provide a simple training method for a joint word segmentation and sequence labeling (e.g., POS-tagging, NER) model.
    • Fix ZeroDivisionError in mecab_system_eval.py.
    Source code(tar.gz)
    Source code(zip)
  • 0.1.2(Dec 25, 2018)

    nagisa 0.1.2 incorporates the following changes:

    • Provide the postagging method #8
    • Adopt the longest match to extract a word in nagisa.Tagger(single_word_list)
    Source code(tar.gz)
    Source code(zip)