API for the GPT-J language model 🦜, including a FastAPI backend and a Streamlit frontend

Overview

gpt-j-api 🦜


An API to interact with the GPT-J language model. You can use and test the model in two different ways:

Using the API

  • Python:
import requests

# Prompt to be completed by the model
context = "In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English."

# Generation parameters, sent as query parameters
payload = {
    "context": context,
    "token_max_length": 512,
    "temperature": 1.0,
    "top_p": 0.9,
}

response = requests.post("http://api.vicgalle.net:5000/generate", params=payload).json()
print(response)
  • Bash:
curl -X 'POST' \
  'http://api.vicgalle.net:5000/generate?context=In%20a%20shocking%20finding%2C%20scientists%20discovered%20a%20herd%20of%20unicorns%20living%20in%20a%20remote%2C%20previously%20unexplored%20valley%2C%20in%20the%20Andes%20Mountains.%20Even%20more%20surprising%20to%20the%20researchers%20was%20the%20fact%20that%20the%20unicorns%20spoke%20perfect%20English.&token_max_length=512&temperature=1&top_p=0.9' \
  -H 'accept: application/json' \
  -d ''
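
Both calls return a JSON object rather than raw text. Going by the sample response quoted in the comments below, the generated continuation is in the "text" field, next to metadata such as "prompt", "compute_time" and the sampling parameters. A minimal sketch for extracting just the completion (field names taken from that sample response):

import requests

payload = {
    "context": "In a shocking finding, scientists discovered a herd of unicorns living in a remote valley.",
    "token_max_length": 128,
    "temperature": 0.9,
    "top_p": 0.9,
}
# The endpoint reads the parameters from the query string, as in the examples above
response = requests.post("http://api.vicgalle.net:5000/generate", params=payload).json()

# "text" holds only the generated continuation; the original prompt is echoed back in "prompt"
print(response["text"])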

Deployment of the API server

Just ssh into a TPU VM. This code was only tested on the v3-8 variants.

First, install the requirements and get the weights:

python3 -m pip install -r requirements.txt
wget https://the-eye.eu/public/AI/GPT-J-6B/step_383500_slim.tar.zstd
sudo apt install zstd
tar -I zstd -xf step_383500_slim.tar.zstd

And just run

python3 serve.py

Then, you can go to http://localhost:5000/docs and use the API!
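
For a quick smoke test of your own deployment from Python (a sketch, assuming the server listens on the default port 5000 used throughout this README):

import requests

# Same request as against the public API, but pointed at the local server
payload = {"context": "Hello, my name is", "token_max_length": 32, "temperature": 0.8, "top_p": 0.9}
response = requests.post("http://localhost:5000/generate", params=payload)
response.raise_for_status()  # fail loudly if the server is not up yet
print(response.json())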

Deploy the streamlit dashboard

Just run

python3 -m streamlit run streamlit_app.py --server.port 8000
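
In case you want to build your own frontend instead, a minimal sketch of a Streamlit page that forwards a prompt to the API could look like this (an illustrative example, not the actual streamlit_app.py; it assumes the API is reachable at http://localhost:5000):

import requests
import streamlit as st

st.title("GPT-J-6B playground")

# Prompt and sampling parameters chosen by the user
context = st.text_area("Context", "In a shocking finding, scientists discovered a herd of unicorns...")
temperature = st.slider("Temperature", 0.0, 1.5, 1.0)
top_p = st.slider("Top-p", 0.0, 1.0, 0.9)
max_tokens = st.slider("Max tokens", 16, 512, 128)

if st.button("Generate"):
    payload = {
        "context": context,
        "token_max_length": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
    }
    response = requests.post("http://localhost:5000/generate", params=payload).json()
    # Display only the generated continuation (the "text" field of the response)
    st.write(response.get("text", response))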

Acknowledgements

Thanks to the support of the TPU Research Cloud, https://sites.research.google/trc/

Comments
  • I've made an extension using this API

    You can check it out here: https://chrome.google.com/webstore/detail/type-j/femdhcgkiiagklmickakfoogeehbjnbh

    At first I was very hyped up and it felt fun, like I was talking to a machine, but then I lost my enthusiasm and now I feel like it's totally useless xD

    I'm just leaving the link here to show my appreciation; it became real thanks to you posting this API.

    Feel free to delete the issue as it's out of scope.

    If you have ideas on how to make it commercially successful, I'll be happy to partner up.

    peace

    opened by oogxdd 5
  • Illegal Instruction

    When installing as described in the README (fresh conda env, python=3.8, Ubuntu), I get an illegal instruction immediately after running python serve.py:

    (gpt-j-api) […]@[…]:/opt/GPT/gpt-j-api$ python -q -X faulthandler serve.py
    Fatal Python error: Illegal instruction
    
    Current thread 0x00007f358d7861c0 (most recent call first):
      File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
      File "<frozen importlib._bootstrap_external>", line 1166 in create_module
      File "<frozen importlib._bootstrap>", line 556 in module_from_spec
      File "<frozen importlib._bootstrap>", line 657 in _load_unlocked
      File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 991 in _find_and_load
      File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
      File "<frozen importlib._bootstrap>", line 1042 in _handle_fromlist
      File "/home/korny/miniconda3/envs/gpt-j-api/lib/python3.8/site-packages/jaxlib/xla_client.py", lin
    e 31 in <module>
      File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 991 in _find_and_load
      File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
      File "<frozen importlib._bootstrap>", line 1042 in _handle_fromlist
      File "/home/korny/miniconda3/envs/gpt-j-api/lib/python3.8/site-packages/jax/lib/__init__.py", line 58 in <module>
      File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
      File "<frozen importlib._bootstrap_external>", line 843 in exec_module
      File "<frozen importlib._bootstrap>", line 671 in _load_unlocked
      File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 991 in _find_and_load
      File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
      File "<frozen importlib._bootstrap>", line 1042 in _handle_fromlist
      File "/home/korny/miniconda3/envs/gpt-j-api/lib/python3.8/site-packages/jax/config.py", line 26 in <module>
      File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
      File "<frozen importlib._bootstrap_external>", line 843 in exec_module
      File "<frozen importlib._bootstrap>", line 671 in _load_unlocked
      File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 991 in _find_and_load
      File "/home/korny/miniconda3/envs/gpt-j-api/lib/python3.8/site-packages/jax/__init__.py", line 33 in <module>
      File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
      File "<frozen importlib._bootstrap_external>", line 843 in exec_module
      File "<frozen importlib._bootstrap>", line 671 in _load_unlocked
      File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 991 in _find_and_load
      File "serve.py", line 3 in <module>
    Illegal instruction (core dumped)
    

    EDIT

    Running this on CPU only, I tried installing jax[cpu] - same result.

    opened by chris-aeviator 4
  • API seems offline

    When I try to access the API I get the following error: ERR_CONNECTION_TIMED_OUT. But when I try to connect to it using a different IP address it does work. Am I IP banned?

    opened by KoenTech 4
  • Usage

    I'm using this to host a Discord chatbot, and though I have slowmode on the channel there's still a lot of usage; often the API is being used as fast as it can generate completions. Will this harm the experience for others? Should I limit it more? (Thanks for making this free, but I don't want to take advantage of that too much if it's bad for others.)

    opened by Heath123 3
  • Alternative to Google TPU VM?

    Hello,

    I would like to run a local instance of GPT-J, but avoid using Google.

    I have little to no experience in machine learning and its requirements; are there other solutions I could use? (What are the requirements for a machine in order to run GPT-J?)

    Thank you very much!

    opened by birkenbaum 3
  • Is there a way to speed up inference?

    Hello, I am currently working on a project where I need quick inference. It needn't be real-time, but something around 7-10 sec would be great. Is there a way to speed up the inference using the API?

    The model does not seem to be a problem as compute_time is around 8sec, but by the time the request arrives it takes around 20 seconds (over 30 on some occasions). Is there a way to make the request a bit faster?

    Thanks,

    opened by Aryagm 2
  • Errno 111

    Could anyone please fix the following error? Thanks a lot.

    "ConnectionError: ...Failed to establish a new connection: [Errno 111] Connection refused"

    opened by Mather10 1
  • How to make the API public?

    Hey, I was able to get serve.py running with the instructions you gave. But now I want to make the API public and connect it to a domain name so it can be publicly accessed (without needing a connection to the VM). How can I achieve this?

    I want to do the same thing you did with "http://api.vicgalle.net:5000/generate" and "http://api.vicgalle.net:5000/docs".

    Thanks,

    opened by Aryagm 1
  • API VM?

    Hi, I want to host my own version of the API. Where is the public one hosted? Is it on a Google Cloud TPU VM? The ones I've seen here https://cloud.google.com/tpu/pricing are very expensive :D Is a TPU VM needed, or can the model run on a normal GPU VM?

    Thanks!

    opened by jryebread 1
  • Raw text...

    This is probably a very stupid question but whenever I run GPT-J I always get the full output:

    {'model': 'GPT-J-6B', 'compute_time': 1.2492187023162842, 'text': ' \n(and you\'ll be a slave)\n\n**_"I\'m not a robot, I\'m a human being."_**\n\n**_"I\'m not a robot, I\'m a human being."_**\n\n', 'prompt': 'AI will take over the world ', 'token_max_length': 50, 'temperature': 0.09, 'top_p': 0.9, 'stop_sequence': None}

    What parameter do I need to change so it only outputs the generated text?

    (and you'll be a slave) I'm not a robot, I'm a human being. I'm not a robot, I'm a human being.

    opened by Vilagamer999 1
  • Latency with TPU VM

    Got things running on Google Cloud, really happy :). Was hoping for a little bit of a speed increase, but computation time is the same and latency on the request seems to be the main delay. Did you experiment with firewalls and ports to improve things?

    opened by Ontopic 1
  • Version support for Huggingface GPT-J 6B

    Could the API also support the Hugging Face version of GPT-J 6B, loaded along these lines?

    from transformers import AutoTokenizer, AutoModelForCausalLM

    # Load the Hugging Face release of GPT-J 6B
    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

    opened by ghost 0
Releases(v0.3)
Owner
Víctor Gallego
Data scientist & predoc researcher