A Python wrapper around the ZPar parser for English.

Overview

NOTE

This project is no longer under active development since there are now excellent pure Python parsers such as Stanza and spaCy. The repository will remain here for archival purposes and the PyPI package will continue to be available.

Introduction


python-zpar is a Python wrapper around the ZPar parser. ZPar was written by Yue Zhang while he was at Oxford University. According to its home page: ZPar is a statistical natural language parser, which performs syntactic analysis tasks including word segmentation, part-of-speech tagging and parsing. ZPar supports multiple languages and multiple grammar formalisms. ZPar has been most heavily developed for Chinese and English, while it provides generic support for other languages. ZPar is fast, processing above 50 sentences per second using the standard Penn Treebank (Wall Street Journal) data.

I wrote python-zpar since I needed a fast and efficient parser for my NLP work, which is done primarily in Python rather than C++. I wanted to be able to use this parser directly from Python without having to create a bunch of files and run them through subprocesses. python-zpar not only provides a simple Python wrapper but also provides an XML-RPC ZPar server to make batch processing of large files easier.

python-zpar uses ctypes, a very cool foreign function library bundled with Python that allows calling functions in C DLLs or shared libraries directly.
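
For illustration, here is a minimal sketch of how ctypes can load a shared library and call a C function in it directly from Python. The library name "zpar.so" and the function tag_sentence below are purely hypothetical; the functions actually exposed by python-zpar are defined in src/zpar.lib.cpp.

import ctypes

# a minimal, hypothetical sketch: load a shared library and declare
# the signature of a C function that takes and returns C strings
lib = ctypes.CDLL("zpar.so")
lib.tag_sentence.argtypes = [ctypes.c_char_p]
lib.tag_sentence.restype = ctypes.c_char_p

# call the C function with a byte string and decode the result
tagged = lib.tag_sentence(b"I am going to the market .")
print(tagged.decode("utf-8"))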

IMPORTANT: As of now, python-zpar only works with the English ZPar models since the interface to the Chinese models is different from that of the English ones. Pull requests are welcome!

Installation

Currently, python-zpar only works on 64-bit Linux and OS X systems. Those are the two platforms I use every day. I am happy to try to get python-zpar working on other platforms over time. Pull requests are welcome!

In order for python-zpar to work, it requires C functions that can be called directly. Since the only user-exposed entry point in ZPar is the command line client, I needed to write a shared library that would have functions built on top of the ZPar functionality but expose them in a way that ctypes could understand.

Therefore, in order to build python-zpar from scratch, we need to download the ZPar source, patch it with new functionality and compile the shared library. All of this happens automatically when you install with pip:

pip install python-zpar

IF YOU ARE USING macOS

  1. On macOS, the installation will only work with gcc installed using either MacPorts or Homebrew. The ZPar source cannot be compiled with clang. If you are having trouble compiling the code after cloning the repository or installing the package using pip, you can try to explicitly override the C++ compiler:

    CXX=<path to c++ compiler> make -e

    or

    CXX=<path to c++ compiler> pip install python-zpar

    If you are curious about what the C functions in the shared library module look like, see src/zpar.lib.cpp.

  2. If you are using macOS Mojave, you will need an extra step before running the pip install command above. Starting with Mojave, Apple stopped installing the C/C++ system header files into /usr/include. As a workaround, they provide the package /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg, which you must install to get the system headers back in the usual place before python-zpar can be compiled (see the command after this list). For more details, please read the Command Line Tools section of the Xcode 10 release notes.

  3. If you are using macOS Catalina, python-zpar is currently broken. I have not yet upgraded to Catalina on my production machine, so I have not been able to figure out a fix. If you have a suggested fix, please reply in the issue.
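
For the Mojave header workaround mentioned in item 2 above, the package can usually be installed from a terminal as follows (the exact path may differ across Xcode versions):

sudo installer -pkg /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg -target /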

Usage

To use python-zpar, you need the English models for ZPar. They can be downloaded from the ZPar release page here. There are three models: a part-of-speech tagger, a constituency parser, and a dependency parser. For the purpose of the examples below, the models are in the english-models directory in the current directory.

Here's a small example of how to use python-zpar:

from six import print_
from zpar import ZPar

# use the zpar wrapper as a context manager
with ZPar('english-models') as z:

    # get the parser and the dependency parser models
    tagger = z.get_tagger()
    depparser = z.get_depparser()

    # tag a sentence
    tagged_sent = tagger.tag_sentence("I am going to the market.")
    print_(tagged_sent)

    # tag an already tokenized sentence
    tagged_sent = tagger.tag_sentence("Do n't you want to come with me to the market ?", tokenize=False)
    print_(tagged_sent)

    # get the dependency parse of an already tagged sentence
    dep_parsed_sent = depparser.dep_parse_tagged_sentence("I/PRP am/VBP going/VBG to/TO the/DT market/NN ./.")
    print_(dep_parsed_sent)

    # get the dependency parse of an already tokenized sentence
    dep_parsed_sent = depparser.dep_parse_sentence("Do n't you want to come with me to the market ?", tokenize=False)
    print_(dep_parsed_sent)

    # get the dependency parse of an already tokenized sentence
    # and include lemma information (assuming you have NLTK as well
    # as its WordNet corpus installed)
    dep_parsed_sent = depparser.dep_parse_sentence("Do n't you want to come with me to the market ?", tokenize=False, with_lemmas=True)
    print_(dep_parsed_sent)

The above code sample produces the following output:

I/PRP am/VBP going/VBG to/TO the/DT market/NN ./.

Do/VBP n't/RB you/PRP want/VBP to/TO come/VB with/IN me/PRP to/TO the/DT market/NN ?/.

I       PRP   1    SUB
am      VBP   -1   ROOT
going   VBG   1    VC
to      TO    2    VMOD
the     DT    5    NMOD
market  NN    3    PMOD
.       .     1    P

Do      VBP  -1  ROOT
n't     RB   0   VMOD
you     PRP  0   SUB
want    VBP  0   VMOD
to      TO   5   VMOD
come    VB   3   VMOD
with    IN   5   VMOD
me      PRP  6   PMOD
to      TO   5   VMOD
the     DT   10  NMOD
market  NN   8   PMOD
?       .    0   P

Do      VBP  -1  ROOT   do
n't     RB   0   VMOD   n't
you     PRP  0   SUB    you
want    VBP  0   VMOD   want
to      TO   5   VMOD   to
come    VB   3   VMOD   come
with    IN   5   VMOD   with
me      PRP  6   PMOD   me
to      TO   5   VMOD   to
the     DT   10  NMOD   the
market  NN   8   PMOD   market
?       .    0   P      ?

Detailed usage with comments is shown in the included file examples/zpar_example.py. Run python zpar_example.py -h to see a list of all available options.

ZPar Server

The package also provides a Python XML-RPC implementation of a ZPar server that makes it easier to process multiple sentences and files by loading the models just once (via the ctypes interface) and letting clients connect and request analyses. The implementation lives in the executable zpar_server that is installed along with the package. The server is quite flexible and loads only the models that you need. Here's an example of how to start the server with the tagger, the constituency parser, and the dependency parser models loaded:

$> zpar_server --modeldir english-models --models tagger parser depparser
INFO:Initializing server ...
Loading tagger from english-models/tagger
Loading model... done.
Loading constituency parser from english-models/conparser
Loading scores... done. (65.9334s)
Loading dependency parser from english-models/depparser
Loading scores... done. (14.9623s)
INFO:Registering introspection ...
INFO:Starting server on port 8859...

Run zpar_server -h to see a list of all options.

Once the server is running, you can connect to it using a client. An example client is included in the file examples/zpar_client.py which can be run as follows (note that if you specified a custom host and port when running the server, you'd need to specify the same here):

$> cd examples
$> python zpar_client.py

INFO:Attempting connection to http://localhost:8859
INFO:Tagging "Don't you want to come with me to the market?"
INFO:Output: Do/VBP n't/RB you/PRP want/VBP to/TO come/VB with/IN me/PRP to/TO the/DT market/NN ?/.
INFO:Tagging "Do n't you want to come to the market with me ?"
INFO:Output: Do/VBP n't/RB you/PRP want/VBP to/TO come/VB to/TO the/DT market/NN with/IN me/PRP ?/.
INFO:Parsing "Don't you want to come with me to the market?"
INFO:Output: (SQ (VBP Do) (RB n't) (NP (PRP you)) (VP (VBP want) (S (VP (TO to) (VP (VB come) (PP (IN with) (NP (PRP me))) (PP (TO to) (NP (DT the) (NN market))))))) (. ?))
INFO:Dep Parsing "Do n't you want to come to the market with me ?"
INFO:Output: Do VBP -1  ROOT
n't RB  0   VMOD
you PRP 0   SUB
want    VBP 0   VMOD
to  TO  5   VMOD
come    VB  3   VMOD
to  TO  5   VMOD
the DT  8   NMOD
market  NN  6   PMOD
with    IN  5   VMOD
me  PRP 9   PMOD
?   .   0   P

INFO:Tagging file /Users/nmadnani/work/python-zpar/examples/test.txt into test.tag
INFO:Parsing file /Users/nmadnani/work/python-zpar/examples/test_tokenized.txt into test.parse
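
If you want to write your own client instead of using examples/zpar_client.py, a minimal sketch using Python's standard xmlrpc.client module could look like the one below. The method names tag_sentence and dep_parse_sentence mirror the wrapper API shown earlier but are assumptions here; since the server registers introspection, you can list the registered methods to confirm the actual names.

import xmlrpc.client  # use xmlrpclib on Python 2

# a minimal sketch, assuming the server is running locally on the
# default port shown above; adjust the URL if you started the server
# on a different host or port
proxy = xmlrpc.client.ServerProxy("http://localhost:8859")

# list the methods registered by the server before calling them
print(proxy.system.listMethods())

# hypothetical calls mirroring the wrapper API shown earlier
print(proxy.tag_sentence("I am going to the market."))
print(proxy.dep_parse_sentence("Do n't you want to come with me to the market ?"))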

Note that python-zpar and all of the example scripts should work with both Python 2.7 and Python 3.4. I have tested python-zpar on both Linux and Mac but not on Windows.

Node.js version

If you want to use ZPar in your node.js app, check out my other project node-zpar.

License

Although python-zpar is licensed under the MIT license - which means that you can do whatever you want with the wrapper code - ZPar itself is licensed under GPL v3.

ToDo

  1. Improve error handling on both the Python and C sides.
  2. Expose more functionality, e.g., Chinese word segmentation, parsing, etc.
  3. Maybe look into using CFFI instead of ctypes.
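
For context on the last item, a CFFI-based version of the earlier ctypes sketch could look roughly like this; the library and function names are the same hypothetical ones and not the actual python-zpar interface.

from cffi import FFI

# declare the hypothetical C signature, then open the shared library
ffi = FFI()
ffi.cdef("const char *tag_sentence(const char *sentence);")
lib = ffi.dlopen("zpar.so")

# call the function and convert the returned char * to a Python string
tagged = lib.tag_sentence(b"I am going to the market .")
print(ffi.string(tagged).decode("utf-8"))
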
Comments
  • compilation errors during build

    I downloaded zpar wrapper and ran ‘make’ in order to build zpar and zpar wrapper. But, I got the following error:

    In file included from ./src/include/hash.h:25:
    ./src/include/hash_stream.h:18:11: error: call to function 'operator>>' that is neither
          visible in the template definition nor found by argument-dependent lookup
          iss >> table[key] ;
              ^
    ./src/common/tagger/implementations/collins/tagger.h:118:9: note: in instantiation of
          function template specialization 'operator>><CWord, english::CTag>' requested here
          i >> (*m_TopTags);
            ^
    ./src/english/tags.h:29:23: note: 'operator>>' should be declared prior to the call site
          or in namespace 'english'
    inline std::istream & operator >> (std::istream &is, english::CTag &tag) {
                          ^
    1 error generated.
    make[1]: *** [obj/english.postagger.o] Error 1
    make: *** [python-zpar] Error 2
    

    Can you advise me how to resolve the error?

    opened by cml54 14
  • Installing on MAC OS X

    I’m using MAC OSX and the command:

    CXX=/usr/bin/gcc make -e

    Doesn’t work when I’m in the unzipped directory? It seems like it fails on the wget command for the underlying zpar from github. Actual output:

    make: wget: No such file or directory

    Actually, just solved this part.

    Still results in this error eventually though:

    error: call to function 'operator>>' that is neither visible in the template definition nor found by argument-dependent lookup

    Same one that I get in the individual zpar directory when trying to install it independently.

    So I downloaded the individual zpar, and tried to install that separately but that one leads to errors that I believe are related to clang. Using the same CXX command within that file also didn’t work.

    opened by atishsawant 9
  • Make this a real Python package

    Obviously what we've got right now is a great step in the right direction, but I think in order to see wider-spread adoption, we should really have a zpar Python module that does a lot of the boilerplate in the README and zpar_example.py for the user.

    It'd be really nice if someone could just run:

    import zpar
    
    tagger = zpar.Tagger("english-models")
    parser = zpar.Parser("english-models")
    
    tagger.tag_sentence("Here's a sentence.")
    parser.parse_sentence("Here's a sentence.")
    

    instead of requiring the user to do all the ctypes machinations in zpar_example.py.

    We should also make a setup.py file so that people could run pip install zpar and have it do all the compilation stuff automatically.

    enhancement 
    opened by dan-blanchard 8
  • Feed pre-POS-tagged input to the parser

    Greetings! :smile: One thing that would be amazing would be the ability to feed the parser pre-POS-tagged input, in whatever format of your or the original ZPar author's choosing, and have the parser generate the syntactic parse based on that input.

    Thanks! :smile:

    enhancement 
    opened by dmnapolitano 6
  • Throw

    If anything goes wrong in ZPar, it throws an error message and expects it to be caught by the top-level application. These errors need to be caught before returning to Python, or the Python interpreter will crash.

    opened by rmalouf 5
  • Adding lemmas to dependency parses

    • Dependency parses can now contain lemmas in the last column, if NLTK as well as the WordNet corpus for NLTK are both installed. This is done by passing with_lemmas=True to the dep_parse_sentence() method of a dependency parser object. If either NLTK or the WordNet corpus is not installed, then passing with_lemmas=True will print a warning and produce the regular dependency tree without lemmas.
    • There are also new unit tests for dependency parsers testing the lemma functionality.
    • This PR also contains some other changes pertaining to making the CircleCI builds more efficient and working around the 4GB RAM limit they have on their containers.

    @aoifecahill can you please test this out since you are going to be one of the main consumers for this? :)

    opened by desilinguist 4
  • 2.7 support?

    Hello, the README.md doesn't mention which version of Python is required; however, the following

    >>> with ZPar('.../zpar/models/english') as z:
    ...     parser = z.get_parser()
    ...     print(parser.parse_sentence("Do n't you want to come with me to the market ?", tokenize=False))
    

    works as expected in 3.3, but with 2.7: *** glibc detected *** .../python: free(): invalid pointer: 0x00007fed27db7810 *** followed by a huge backtrace.

    If this is to be expected, could you put something in README.md that says that Python 3 is required? Thanks. :smile:

    opened by dmnapolitano 4
  • Universal Dependencies and Stanford Dependencies

    How can I change the default depparser to use Universal Dependencies or Stanford Dependencies? The default tagset is "ROOT AMOD DEP NMOD OBJ P PMOD PRD SBAR SUB VC VMOD". I can't find any description for these labels and can't use them in my project.

    opened by xushenkun 3
  • Logging setup

    Currently, we are modifying the config for the root logger in Tagger.py etc. using logging.basicConfig. This is not a good idea.

    Actually, it looks like we aren't really using logging there in any meaningful way, so maybe we can just get rid of logging from those files altogether?

    opened by desilinguist 3
  • add support to parse pre-tokenized text?

    It would be nice to have the option to specify whether the input text is tokenized or not and have the parser respect that. The default behaviour seems to be to assume untokenized text (at least for ``).

    enhancement 
    opened by aoifecahill 3
  • install failure on Linux server

    pip install python-zpar
    Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
    Collecting python-zpar
      Using cached https://pypi.tuna.tsinghua.edu.cn/packages/73/80/6961436556d7720239234a41e564cd30eed632f0f3a39ca8d82f288fb858/python-zpar-0.9.5.tar.gz (18 kB)
      Preparing metadata (setup.py) ... done
    Building wheels for collected packages: python-zpar
      Building wheel for python-zpar (setup.py) ... error
      error: subprocess-exited-with-error

      × python setup.py bdist_wheel did not run successfully.
      │ exit code: 1
      ╰─> [6 lines of output]
          running bdist_wheel
          running build
          running build_zpar
          compiling zpar library
          ********************************************************************************
          error: [Errno 2] No such file or directory: 'make'
          [end of output]

      note: This error originates from a subprocess, and is likely not a problem with pip.
      ERROR: Failed building wheel for python-zpar
      Running setup.py clean for python-zpar
    Failed to build python-zpar
    Installing collected packages: python-zpar
      Running setup.py install for python-zpar ... error
      error: subprocess-exited-with-error

      × Running setup.py install for python-zpar did not run successfully.
      │ exit code: 1
      ╰─> [8 lines of output]
          running install
          /opt/conda/envs/rstenv/lib/python3.8/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
            warnings.warn(
          running build
          running build_zpar
          compiling zpar library
          ********************************************************************************
          error: [Errno 2] No such file or directory: 'make'
          [end of output]

      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: legacy-install-failure

    × Encountered error while trying to install package.
    ╰─> python-zpar

    note: This is an issue with the package mentioned above, not pip.
    hint: See above for output from the failure.

    opened by Anker-Lee 2
  • install failure (Failed building wheel for python-zpar) on macOS Catalina

    When installing python-zpar using pip install python-zpar, it gives:

    wget -N https://github.com/frcchang/zpar/archive/v0.7.5.tar.gz -O /tmp/zpar.tar.gz
      make: wget: No such file or directory
      make: *** [/tmp/zpar.tar.gz] Error 1
    
       Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/private/var/folders/gv/z_6yynkd2sjchc5710zh5sjm0000gn/T/pip-install-yw_mm28r/python-zpar/setup.py", line 111, in <module>
            ['zpar_server = zpar.zpar_server:main']}
          File "/anaconda3/lib/python3.6/site-packages/setuptools/__init__.py", line 129, in setup
            return distutils.core.setup(**attrs)
          File "/anaconda3/lib/python3.6/distutils/core.py", line 148, in setup
            dist.run_commands()
          File "/anaconda3/lib/python3.6/distutils/dist.py", line 955, in run_commands
            self.run_command(cmd)
          File "/anaconda3/lib/python3.6/distutils/dist.py", line 974, in run_command
            cmd_obj.run()
          File "/private/var/folders/gv/z_6yynkd2sjchc5710zh5sjm0000gn/T/pip-install-yw_mm28r/python-zpar/setup.py", line 70, in run
            install.run(self)
          File "/anaconda3/lib/python3.6/site-packages/setuptools/command/install.py", line 61, in run
            return orig.install.run(self)
          File "/anaconda3/lib/python3.6/distutils/command/install.py", line 545, in run
            self.run_command('build')
          File "/anaconda3/lib/python3.6/distutils/cmd.py", line 313, in run_command
            self.distribution.run_command(command)
          File "/anaconda3/lib/python3.6/distutils/dist.py", line 974, in run_command
            cmd_obj.run()
          File "/private/var/folders/gv/z_6yynkd2sjchc5710zh5sjm0000gn/T/pip-install-yw_mm28r/python-zpar/setup.py", line 50, in run
            self.execute(compile, [], 'compiling zpar library')
          File "/anaconda3/lib/python3.6/distutils/cmd.py", line 335, in execute
            util.execute(func, args, msg, dry_run=self.dry_run)
          File "/anaconda3/lib/python3.6/distutils/util.py", line 301, in execute
            func(*args)
          File "/private/var/folders/gv/z_6yynkd2sjchc5710zh5sjm0000gn/T/pip-install-yw_mm28r/python-zpar/setup.py", line 48, in compile
            raise RuntimeError('ZPar shared library compilation failed')
        RuntimeError: ZPar shared library compilation failed
        
    
    

    I have already changed my C++ and C compilers to gcc:

    pengqiweideMacBook-Pro:~ pengqiwei$ gcc --version
    gcc-8 (Homebrew GCC 8.2.0) 8.2.0
    Copyright (C) 2018 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions.  There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    
    

    I am not sure what went wrong...

    opened by Punchwes 23
  • Update code to integrate Chinese parsers/taggers

    I use ZPar for dependency parsing, but I found that python-zpar can't load the Chinese models successfully. The error looks like this:

    Loading tagger from ../chinese-models/tagger
    Loading model...terminate called after throwing an instance of 'std::string'
    Aborted

    My code is:

    from six import print_
    from zpar import ZPar

    chinese_model = "../chinese-models"
    with ZPar(chinese_model) as z:
        depparser = z.get_depparser()

    I downloaded chinese-models.zip from the GitHub archive.

    I also tried the english-models, and python-zpar loads the English models successfully.

    Thanks

    help wanted 
    opened by buptdjd 2
Releases (0.9.5)
  • 0.9.5(Jul 16, 2015)

    • Accompanying release for ZPar v0.7.5 which is a big bugfix release.
    • Fixed segfaults when using python-zpar interactively.
    • Removed hacky fix for single word sentences introduced in v0.9.2 since the underlying bug has been fixed in ZPar.
    • Previously we were programmatically redirecting STDOUT to STDERR because ZPar used to print informational messages to STDOUT. However, this has been fixed in the new release of ZPar. This redirection is no longer necessary and has been removed.
  • 0.9.3(May 29, 2015)

  • 0.9.2(May 28, 2015)

    The latest version of ZPar has a bug where it produces non-deterministic output for sentences that contain a single word in all caps. This hack title-cases such words to make the output deterministic and then restores the original word. The hack will be removed once the underlying bug in ZPar is fixed, which is in progress.

  • 0.9.1(Dec 12, 2014)

  • 0.9.0(Dec 11, 2014)

    • This release adds functions called [dep_]parse_tagged_sent() and [dep_]parse_tagged_file() that allow the user to obtain constituency and dependency parses for already tagged sentences and files.
    • It also adds simple unit tests for all the major functions.
Owner

ETS (Educational Testing Service)