Overview

Deploying ML models with FastAPI, Docker, and Kubernetes

By: Sayak Paul and Chansung Park

This project shows how to serve an ONNX-optimized image classification model as a RESTful web service with FastAPI, Docker, and Kubernetes (k8s). The idea is to first Dockerize the API and then deploy it on a k8s cluster running on Google Kubernetes Engine (GKE). We do this integration using GitHub Actions.

👋 Note: Even though this project uses an image classification model, its structure and techniques can be used to serve other models as well.

Deploying the model as a service with k8s

  • We decouple the model optimization part from our API code. The optimization part is available within the notebooks/TF_to_ONNX.ipynb notebook.

  • Then we test the API locally. You can find the instructions within the api directory. (A minimal endpoint sketch follows this list.)

  • To deploy the API, we define our deployment.yaml workflow file inside .github/workflows. It does the following tasks:

    • Looks for any changes in the specified directory. If there are any changes:
    • Builds and pushes the latest Docker image to Google Container Registry (GCR).
    • Deploys the Docker container on the k8s cluster running on GKE.
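
As referenced above, here is a minimal sketch of what the /predict/image endpoint can look like. It is illustrative only: the model path, input size, and post-processing are assumptions; the actual implementation lives in the api directory.

    # Illustrative sketch of an ONNX-serving FastAPI endpoint (see the api directory for the real code).
    import io

    import numpy as np
    import onnxruntime as ort
    from fastapi import FastAPI, File, Form, UploadFile
    from PIL import Image

    app = FastAPI()
    # Assumed model path; the ONNX file comes from the optimization notebook.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    @app.post("/predict/image")
    async def predict_image(
        image_file: UploadFile = File(...),
        with_resize: bool = Form(False),
        with_post_process: bool = Form(False),
    ):
        image = Image.open(io.BytesIO(await image_file.read())).convert("RGB")
        if with_resize:
            image = image.resize((224, 224))  # assumed input resolution
        batch = np.expand_dims(np.asarray(image, dtype=np.float32), axis=0)
        input_name = session.get_inputs()[0].name
        scores = session.run(None, {input_name: batch})[0]
        if with_post_process:
            # Real post-processing would map the top index to a human-readable label.
            return {"Label": int(np.argmax(scores)), "Score": float(np.max(scores))}
        return {"Scores": scores.tolist()}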

Configurations needed beforehand

  • Create a k8s cluster on GKE. Here's a relevant resource.

  • Create a service account key (JSON) file. It's good practice to grant it only the roles required for the project. For example, for this project, we created a fresh service account and granted it permissions for the following: Storage Admin, GKE Developer, and GCR Developer.

  • Create a secret named GCP_CREDENTIALS in your GitHub repository and paste the contents of the service account key file into that secret.

  • Configure bucket storage related permissions for the service account:

    $ export PROJECT_ID=<PROJECT_ID>
    $ export ACCOUNT=<ACCOUNT>
    
    $ gcloud -q projects add-iam-policy-binding ${PROJECT_ID} \
        --member=serviceAccount:${ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com \
        --role roles/storage.admin
    
    $ gcloud -q projects add-iam-policy-binding ${PROJECT_ID} \
        --member=serviceAccount:${ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com \
        --role roles/storage.objectAdmin
    
    $ gcloud -q projects add-iam-policy-binding ${PROJECT_ID} \
        --member=serviceAccount:${ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com \
        --role roles/storage.objectCreator
  • If you're already on the main branch, then upon a new push, the workflow defined in .github/workflows/deployment.yaml should run automatically. Here's how the final output should look (run link):

Notes

  • Since we use CPU-based pods within the k8s cluster, we apply ONNX optimizations, which are known to provide speed-ups in CPU-based environments (a conversion sketch follows this list). If you are using GPU-based pods, look into TensorRT.
  • We use Kustomize to manage the deployment on k8s.
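
For reference, here is a minimal sketch of the TF-to-ONNX conversion (the sketch uses an off-the-shelf classifier as a stand-in; the actual conversion lives in notebooks/TF_to_ONNX.ipynb):

    # Illustrative TF -> ONNX conversion; the notebook converts the project's own classifier.
    import tensorflow as tf
    import tf2onnx

    # Stand-in Keras model for the sketch.
    model = tf.keras.applications.ResNet50(weights="imagenet")
    input_signature = [tf.TensorSpec([None, 224, 224, 3], tf.float32, name="input")]

    # Convert the Keras model and serialize the ONNX graph to disk.
    onnx_model, _ = tf2onnx.convert.from_keras(
        model, input_signature=input_signature, opset=13, output_path="model.onnx"
    )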

Querying the API endpoint

From the workflow outputs, you should see something like this:

NAME             TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
fastapi-server   LoadBalancer   xxxxxxxxxx     xxxxxxxxxx      80:30768/TCP   23m
kubernetes       ClusterIP      xxxxxxxxxx     <none>          443/TCP        160m

Note the EXTERNAL-IP corresponding to fastapi-server (if you have named your service like so). Then cURL it:

curl -X POST -F image_file=@cat.jpg -F with_resize=True -F with_post_process=True http://{EXTERNAL-IP}:80/predict/image

You should get the following output (if you're using the cat.jpg image present in the api directory):

"{\"Label\": \"tabby\", \"Score\": \"0.538\"}"

The request assumes that you have a file called cat.jpg present in your working directory.
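
If you prefer Python over cURL, here is an equivalent request sketch using the requests library (it assumes the same form fields as the cURL call above):

    # Query the deployed endpoint with Python requests (same form fields as the cURL example).
    import requests

    url = "http://{EXTERNAL-IP}:80/predict/image"  # substitute your service's EXTERNAL-IP
    with open("cat.jpg", "rb") as f:
        response = requests.post(
            url,
            files={"image_file": f},
            data={"with_resize": "True", "with_post_process": "True"},
        )
    print(response.json())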

TODOs

  • Set up logging for the k8s pods.
  • Find a better way to report the latest API endpoint.

Acknowledgements

ML-GDE program for providing GCP credit support.

Comments
  • Feat/locust grpc

    @deep-diver currently, the load test runs into:

    [Screenshot of the error encountered during the load test, 2022-04-02]

    I have ensured https://github.com/sayakpaul/ml-deployment-k8s-fastapi/blob/feat/locust-grpc/locust/grpc/locustfile.py#L49 returns the correct output. But after a few requests, I run into the above problem.

    Also, I should mention that the gRPC client currently does not handle image resizing, which makes it a bit less comparable to the REST client that handles both preprocessing and postprocessing.

    opened by sayakpaul 18
  • Setup TF Serving based deployment

    In this new feature, the following work is expected:

    • Create a new notebook with the TF Serving prototype based on both gRPC (Ref) and REST API (Ref).

    • Update the newly created notebook to check %%timeit timings against the TF Serving server running locally.

    • Build/commit a Docker image based on the TF Serving base image using this method.

    • Deploy the built Docker image on the GKE cluster.

    • Check the deployed model's performance under various scenarios (possibly the same ones applied to the ONNX+FastAPI setup).

    new feature 
    opened by deep-diver 11
  • Perform load testing with Locust

    Resources:

    • https://towardsdatascience.com/performance-testing-an-ml-serving-api-with-locust-ecd98ab9b7f7
    • https://microsoft.github.io/PartsUnlimitedMRP/pandp/200.1x-PandP-LocustTest.html
    • https://github.com/https-deeplearning-ai/machine-learning-engineering-for-production-public/tree/main/course4/week2-ungraded-labs/C4_W2_Lab_3_Latency_Test_Compose
    opened by sayakpaul 10
  • 4 dockerize

    fix

    • move api/utils/requirements.txt to /api
    • add missing dependency python-multipart to the requirements.txt

    add

    • Dockerfile

    Closes https://github.com/sayakpaul/ml-deployment-k8s-fastapi/issues/4

    opened by deep-diver 4
  • Deployment on GKE with GitHub Actions

    Closes https://github.com/sayakpaul/ml-deployment-k8s-fastapi/issues/5, https://github.com/sayakpaul/ml-deployment-k8s-fastapi/issues/7, and https://github.com/sayakpaul/ml-deployment-k8s-fastapi/issues/6.

    opened by sayakpaul 2
  • chore: refactored the colab notebook.

    Just added a text cell explaining why it's better to include the preprocessing function in the final exported model. Also added a cell to check that the TF and ONNX outputs match with np.testing.assert_allclose().

    opened by sayakpaul 2