BMW Semantic Segmentation GPU/CPU Inference API

This is a repository for a Semantic Segmentation inference API using the Gluoncv CV toolkit.

The training GUI (also based on the Gluoncv CV toolkit) for the Semantic Segmentation workflow will be published soon.

A sample inference model is provided with this repository for testing purposes.

This repository can be deployed using Docker.

Note: To use the sample inference model provided with this repository, make sure to clone the repository with git clone rather than downloading it as a ZIP archive; the ZIP download contains only the Git LFS pointer instead of the actual model.
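For reference, a typical way to fetch the repository together with the LFS-tracked model might look like the following (the repository URL placeholder is yours to fill in):

git lfs install
git clone <repository-URL>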


Prerequisites

  • Ubuntu 18.04 or 20.04 LTS
  • Windows 10 Pro with Hyper-V enabled and Docker Desktop
  • NVIDIA Drivers (410.x or higher)
  • Docker CE latest stable release
  • NVIDIA Docker 2
  • Git LFS (Large File Storage): installation

Note: The Windows deployment supports only the CPU version, so the NVIDIA drivers and NVIDIA Docker are not required.

Check for prerequisites

To check if you have docker-ce installed:

docker --version

To check if you have nvidia-docker2 installed:

dpkg -l | grep nvidia-docker2


To check your NVIDIA driver version, open your terminal and run the following command:

nvidia-smi

Install prerequisites

Use the following command to install Docker on Ubuntu:

chmod +x install_prerequisites.sh && source install_prerequisites.sh

For GPU usage, install the NVIDIA drivers (410.x or higher) and NVIDIA Docker by following the official docs.

Build The Docker Image

To build the docker environment, run the following command in the project's directory:

  • For GPU Build:
docker build -t gluoncv_segmentation_inference_api_gpu -f ./GPU/dockerfile .
  • For CPU Build:
docker build -t gluoncv_segmentation_inference_api_cpu -f ./CPU/dockerfile .
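After either build completes, a quick optional sanity check is to confirm that the image is listed by Docker:

docker images | grep gluoncv_segmentation_inference_api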

Behind a proxy

  • For GPU Build:
docker build --build-arg http_proxy='' --build-arg https_proxy='' -t gluoncv_segmentation_inference_api_gpu -f ./GPU/dockerfile .
  • For CPU Build:
docker build --build-arg http_proxy='' --build-arg https_proxy='' -t gluoncv_segmentation_inference_api_cpu -f ./CPU/dockerfile .
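For example, to build the CPU image behind a corporate proxy (the proxy address below is purely illustrative), pass your proxy URL in both build arguments:

docker build --build-arg http_proxy='http://proxy.example.com:8080' --build-arg https_proxy='http://proxy.example.com:8080' -t gluoncv_segmentation_inference_api_cpu -f ./CPU/dockerfile .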

Run the docker container

To run the inference API, go to the API's directory and run the following (a concrete example is sketched after the list):

Using Linux based docker:

  • For GPU:
docker run --gpus '"device=<- gpu numbers separated by commas ex:"0,1,2" ->"' -itv $(pwd)/models:/models -p <port-of-your-choice>:4343 gluoncv_segmentation_inference_api_gpu
  • For CPU:
docker run -itv $(pwd)/models:/models -p <port-of-your-choice>:4343 gluoncv_segmentation_inference_api_cpu
  • For Windows:
docker run -itv ${PWD}/models:/models -p <port-of-your-choice>:4343 gluoncv_segmentation_inference_api_cpu
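For example, a CPU container that maps the models folder and exposes the API on host port 4343 (an arbitrary choice) could be started like this:

docker run -itv $(pwd)/models:/models -p 4343:4343 gluoncv_segmentation_inference_api_cpu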

API Endpoints

To see all available endpoints, open your favorite browser and navigate to:

http://<machine_URL>:<Docker_host_port>/docs
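For example, if the container is running on your local machine and you mapped host port 4343 to the container's port 4343, the documentation would be available at http://localhost:4343/docs.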

The 'predict_batch' endpoint is not shown in Swagger; its list-of-files input is not yet supported.

Endpoints summary

/load (GET)

Loads all available models and returns each model with its hashed value. Loaded models are stored and are not loaded again.

/detect (POST)

Performs inference with the specified model on the provided image and returns the result as JSON.

/get_labels (POST)

Returns all of the specified model's labels with their hashed values.

/models (GET)

Lists all available models.

/models/{model_name}/load (GET)

Loads the specified model. Loaded models are stored and are not loaded again.

/models/{model_name}/predict (POST)

Performs inference with the specified model on the provided image and returns the result as JSON (exactly like /detect).

/models/{model_name}/predict_image (POST)

Performs inference with the specified model on the provided image and returns the image with the predicted segments drawn as transparent overlays.

/models/{model_name}/inference (POST)

Performs inference with the specified model on the provided image and returns only the predicted segments as an image.


/models/{model_name}/labels (GET)

Returns all of the specified model's labels.

/models/{model_name}/config (GET)

Returns the specified model's configuration.
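As a rough illustration only, the endpoints can be exercised with curl once the container is running. The model name "my_model" below is a hypothetical folder under "models", and the multipart field name used for the image upload is an assumption, so verify the exact request format on the /docs page first:

# List the available models (GET)
curl -X GET http://localhost:4343/models

# Load a specific model (GET); "my_model" is a hypothetical model folder name
curl -X GET http://localhost:4343/models/my_model/load

# Run inference on an image (POST); the form field name is an assumption -- check /docs
curl -X POST http://localhost:4343/models/my_model/predict -F "image=@test.jpg"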

Model structure

The folder "models" contains sub-folders of all the models to be loaded.

You can copy the model sub-folder generated after training (the training GUI will be published soon), place it inside the "models" folder in your inference repository, and you're all set to infer.

The model sub-folder should contain the following:

  • model_best.params

  • palette.txt: If you don't have your own palette, you can generate a random one by running the command below in your project's repository and copying the resulting palette.txt to your model directory:

python3 generate_random_palette.py
  • configuration.json

The configuration.json file should look like the following:

{
    "inference_engine_name" : "gluonsegmentation",
    "backbone": "resnet101",
    "batch-size": 4,
    "checkname": "bmwtest",
    "classes": 3,
    "classesname": [
        "background",
        "pad",
        "circle"
    ],
    "network": "fcn",
    "type":"segmentation",
    "epochs": 10,
    "lr": 0.001,
    "momentum": 0.9,
    "num_workers": 4,
    "weight-decay": 0.0001
}
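Putting this together, a hypothetical model folder named "my_model" (the name is arbitrary) would be laid out as follows:

models/
└── my_model/
    ├── model_best.params
    ├── palette.txt
    └── configuration.json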

Acknowledgements

  • Roy Anwar, Beirut, Lebanon
  • Hadi Koubeissy, inmind.ai, Beirut, Lebanon