REST API for sentence tokenization and embedding using Multilingual Universal Sentence Encoder.

Overview

What is MUSE?

MUSE stands for Multilingual Universal Sentence Encoder, the multilingual extension (supporting 16 languages) of the Universal Sentence Encoder (USE).
MUSE/USE models encode sentences into fixed-size embedding vectors.

MUSE paper: Multilingual Universal Sentence Encoder for Semantic Retrieval (arXiv:1907.04307).
USE paper: Universal Sentence Encoder (arXiv:1803.11175).
USE visual explainer article: link.

What is MUSE as Service?

MUSE as Service is a REST API for sentence tokenization and embedding using MUSE.
It is written in Flask and served with Gunicorn.
You can configure Gunicorn via the gunicorn.conf.py file.
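
For example, a minimal gunicorn.conf.py might look like the sketch below; the values here are illustrative, not necessarily the repository's actual settings:

# gunicorn.conf.py -- illustrative values only
bind = "0.0.0.0:5000"  # host and port the service listens on
workers = 1            # a single worker keeps one copy of the TF model in memory
timeout = 120          # allow slow first requests while the model loads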

Installation

# clone repo
git clone https://github.com/dayyass/muse_as_service.git

# install dependencies
cd muse_as_service
pip install -r requirements.txt

Run Service

To launch the service, use a Docker container (either locally or on a server):

docker build -t muse_as_service .
docker run -d -p 5000:5000 --name muse_as_service muse_as_service

NOTE: you can also launch the service without Docker, using Gunicorn (sh ./gunicorn.sh) or Flask (python app.py), but running it inside a Docker container is preferable.
NOTE: instead of building the Docker image yourself, you can pull it from Docker Hub:
docker pull dayyass/muse_as_service
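
You can then start a container from the pulled image with the same docker run command as above, substituting the Docker Hub image name:

docker run -d -p 5000:5000 --name muse_as_service dayyass/muse_as_service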

Usage

After you launch the service, you can tokenize and embed any {sentence} using GET requests, where {ip} is the address of the machine the service was launched on:

http://{ip}:5000/tokenize?sentence={sentence}
http://{ip}:5000/embed?sentence={sentence}
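
For example, with the service running locally on the default port 5000, you can test the tokenizer endpoint with curl (note that the sentence must be URL-encoded):

curl "http://localhost:5000/tokenize?sentence=This%20is%20sentence%20example."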

You can use the Python requests library to work with these GET requests (example notebook):

import numpy as np
import requests

ip = "localhost"
port = 5000

sentence = "This is sentence example."

# tokenizer
response = requests.get(
    url=f"http://{ip}:{port}/tokenize",
    params={"sentence": f"{sentence}"},
)
tokenized_sentence = response.json()["content"]

# embedder
response = requests.get(
    url=f"http://{ip}:{port}/embed",
    params={"sentence": f"{sentence}"},
)
embedding = np.array(response.json()["content"][0])

# results
print(tokenized_sentence)  # ['▁This', '▁is', '▁sentence', '▁example', '.']
print(embedding.shape)  # (512,)
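
Since MUSE embeddings are fixed-size vectors, a common use case is semantic similarity. The sketch below compares two sentences by cosine similarity via the /embed endpoint; the embed helper is hypothetical, written only for this example:

import numpy as np
import requests

def embed(sentence: str, ip: str = "localhost", port: int = 5000) -> np.ndarray:
    # hypothetical helper wrapping the /embed endpoint shown above
    response = requests.get(
        url=f"http://{ip}:{port}/embed",
        params={"sentence": sentence},
    )
    return np.array(response.json()["content"][0])

embedding_a = embed("This is sentence example.")
embedding_b = embed("This is another sentence.")

# cosine similarity of the two 512-dimensional vectors
similarity = np.dot(embedding_a, embedding_b) / (
    np.linalg.norm(embedding_a) * np.linalg.norm(embedding_b)
)
print(similarity)  # values closer to 1.0 indicate higher semantic similarity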

However, it is better to use the built-in MUSEClient for sentence tokenization and embedding: it wraps the functionality of the requests library and provides a simpler interface (example notebook):

from muse_as_service import MUSEClient

ip = "localhost"
port = 5000

sentence = "This is sentence example."

# init client
client = MUSEClient(
    ip=ip,
    port=port,
)

# tokenizer
tokenized_sentence = client.tokenize(sentence)

# embedder
embedding = client.embed(sentence)

# results
print(tokenized_sentence)  # ['▁This', '▁is', '▁sentence', '▁example', '.']
print(embedding.shape)  # (512,)

Citation

If you use muse_as_service in a scientific publication, we would appreciate a citation using the following BibTeX entry:

@misc{dayyass_muse_as_service,
    author = {El-Ayyass, Dani},
    title = {Multilingual Universal Sentence Encoder REST API},
    howpublished = {\url{https://github.com/dayyass/muse_as_service}},
    year = {2021},
}