Step by step: how to create a vision recognition model using LOBE.ai, export the model, and run it in an Azure Function

Overview

Vision recognition using LOBE AI and Azure Functions

License: MIT Twitter: elbruno GitHub: elbruno

During the last couple of months, I've been having fun with my new friends at home: 🐿️ 🐿️ 🐿️. These little ones are extremely funny, and they literally don't care about the cold 🥶 ❄️ ☃️.

So, I decided to help them and build an Automatic Feeder using Azure IoT, a Wio Terminal, and maybe some more devices. You can check the Azure IoT project here: Azure IoT - Squirrel Feeder.

Once the feeder was ready, I decided to add a new feature to the scenario: detecting when a squirrel 🐿️ is near the feeder. In this repository I'll share:

  • How to create an image recognition model using LOBE.
  • How to export the model to a TensorFlow image format.
  • How to run the model in an Azure Function.

LOBE AI

LOBE is a free, easy-to-use Microsoft desktop application that allows you to build, manage, and use custom machine learning models. With Lobe, you can create an image classification model to categorize images into labels that represent their content.

Here's a summary of how to prepare a model in Lobe:

  • Import and label images.
  • Train your model.
  • Evaluate training results.
  • Play with your model to experiment with different scenarios.
  • Export and use your model in an app.

The "Overview of image classification model by Lobe" section of the documentation contains step-by-step instructions that let you build a model and get results in a short period of time.

You can use the images in the "LOBE/Train/" directory in this repository to train your model.

Here is the model performing live recognition in action:

Exporting the model to TensorFlow

Once the project is trained, you can export it to several formats. We will use the TensorFlow format for the Azure Function.

The exported model has several files. The following list shows the files that we use in our Azure Function:

  • labels.txt: The labels that the model recognizes
  • saved_model.pb: The model definition
  • signature.json: The model signature
  • example/tf_example.py: Sample Python code that uses the exported model

You can check the exported model in the "Lobe/ExportedModel" directory in this repository.
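
If you want to explore the export outside of the Azure Function, the SavedModel can be opened with plain TensorFlow. The following is a small sketch that loads the labels and lists the serving signatures; the "Lobe/ExportedModel" path and the use of tf.saved_model.load are assumptions based on a standard TensorFlow export:

# Quick inspection of the exported model (assumes the "Lobe/ExportedModel" path)
import os
import tensorflow as tf

EXPORT_DIR = "Lobe/ExportedModel"

# labels.txt contains one label per line, in the order the model outputs them
with open(os.path.join(EXPORT_DIR, "labels.txt"), "r") as f:
    labels = [line.strip() for line in f if line.strip()]
print("Labels:", labels)

# Load the SavedModel and list the available serving signatures
model = tf.saved_model.load(EXPORT_DIR)
print("Signatures:", list(model.signatures.keys()))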

Azure Function

Time to code! Let's create a new Azure Function using Visual Studio Code and the Azure Functions extension for Visual Studio Code.

Changes to __init__.py

The following is the final code for the __init__.py file in the Azure Function.

A couple of notes:

  • The function will receive a POST request with the file bytes in the body.
  • In order to use the TFModel helper, we import the TFModel class from the tf_model_helper.py file using the relative import ".tf_model_helper".
  • ASSETS_PATH and TF_MODEL are the variables that we will use to access the exported model. We will use os.path to resolve the current path to the exported model.
  • The result of the function will be a JSON string with the prediction. jsonify converts the TFModel prediction into a JSON string.
import logging
import azure.functions as func

# Imports for image processing
import io
import os
from PIL import Image
from flask import Flask, jsonify

# Imports for prediction
from .tf_model_helper import TFModel

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    results = "{}"
    try:
        # get and load image from POST
        image_bytes = req.get_body()    
        image = Image.open(io.BytesIO(image_bytes))

        # Load and initialize the model and the app context
        app = Flask(__name__)  

        # load LOBE Model using the current directory
        scriptpath = os.path.abspath(__file__)
        scriptdir  = os.path.dirname(scriptpath)
        ASSETS_PATH = os.path.join(scriptdir, "model")
        TF_MODEL = TFModel(ASSETS_PATH)

        with app.app_context():        
            # predict image and process results in json string format
            results = TF_MODEL.predict(image)            
            jsonresult = jsonify(results)
            jsonStr = jsonresult.get_data(as_text=True)
            results = jsonStr

    except Exception as e:
        logging.info(f'exception: {e}')
        pass 

    # return results
    logging.info('Image processed. Results: ' + results)
    return func.HttpResponse(results, status_code=200)
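
The TFModel class lives in the tf_model_helper.py file next to __init__.py in the function folder, together with the exported model files. As a rough idea of what such a helper involves, here is a minimal, simplified sketch; it is not the file used in this repository, and the serving signature name, the 224x224 input size, and the output format below are assumptions that may not match your Lobe export:

# Simplified sketch of a TFModel helper (illustrative only, not the repository file)
import os

import numpy as np
import tensorflow as tf
from PIL import Image


class TFModel:
    def __init__(self, model_dir: str):
        # The real helper also reads signature.json to discover the model
        # inputs and outputs; this sketch hardcodes assumptions instead.
        self.model = tf.saved_model.load(model_dir)
        # Assumption: the export exposes a "serving_default" signature
        self.predict_fn = self.model.signatures["serving_default"]
        # labels.txt: one class name per line, in output order
        with open(os.path.join(model_dir, "labels.txt"), "r") as f:
            self.labels = [line.strip() for line in f if line.strip()]

    def predict(self, image: Image.Image) -> dict:
        # Assumption: the model expects a 224x224 RGB image scaled to [0, 1]
        image = image.convert("RGB").resize((224, 224))
        batch = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)
        outputs = self.predict_fn(tf.constant(batch))
        # Take the first (only) output tensor and pair each score with its label
        scores = list(outputs.values())[0].numpy()[0].tolist()
        ranked = sorted(zip(self.labels, scores), key=lambda p: p[1], reverse=True)
        return {"predictions": [{"label": l, "confidence": c} for l, c in ranked]}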

Changes to requirements.txt

The requirements.txt file will define the necessary libraries for the Azure Function. We will use the following libraries:

# DO NOT include azure-functions-worker in this file
# The Python Worker is managed by Azure Functions platform
# Manually managing azure-functions-worker may cause unexpected issues

azure-functions
requests
Pillow
numpy
flask
tensorflow
opencv-python

Sample Code

You can view the completed sample function code in the "AzureFunction/LobeSquirrelDetectorFunction/" directory in this repository.

Testing the sample

Once our code is complete, we can test the sample locally or in Azure Functions after we deploy the function. In both scenarios, we can use any tool or language that can send HTTP POST requests to the function.

Test using Curl

Curl is a command-line tool that allows you to send HTTP requests to a server. We can test the local function using curl with the following command:

❯ curl -X POST http://localhost:7071/api/LobeSquirrelDetectorFunction --data-binary "@01.jpg"
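
Because the function only expects the raw image bytes in the body of a POST request, any HTTP client works. For example, here is a small Python script using the requests library (already listed in requirements.txt), assuming 01.jpg is a local test image and the function is running locally:

# Send a local image to the function and print the prediction
import requests

url = "http://localhost:7071/api/LobeSquirrelDetectorFunction"

with open("01.jpg", "rb") as image_file:
    response = requests.post(url, data=image_file.read())

print(response.status_code)
print(response.text)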

Test using Postman

Postman is a great tool to test our function, both in local mode and once it has been deployed to Azure Functions. You can download Postman here.

In order to test our function, we need to know the function URL. In Visual Studio Code, we can get the URL by clicking on the Functions section in the Azure extension, then right-clicking on the function and selecting "Copy Function URL".

Now we can go to Postman and create a new POST request using our function URL, attaching the image we want to test. Here is a live demo, with the function running locally in Debug mode in Visual Studio Code:

We are now ready to test our function in Azure Functions. To do so, we need to deploy the function to Azure and use the new Azure Function URL with the same test steps.
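
For example, a call to the deployed function looks like the local curl call, except for the host name and, if the function does not use anonymous authorization, the function key in the query string (both values below are placeholders):

❯ curl -X POST "https://<your-function-app>.azurewebsites.net/api/LobeSquirrelDetectorFunction?code=<function-key>" --data-binary "@01.jpg"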

Additional Resources

You can check a session recording about this topic in English and Spanish.

These links will help you understand specific implementations of the sample code:

In my personal blog "ElBruno.com", I wrote about several scenarios on how to work and code with LOBE.

Author

👤 Bruno Capuano

🤝 Contributing

Contributions, issues and feature requests are welcome!

Feel free to check the issues page.

Show your support

Give a ⭐️ if this project helped you!

📝 License

Copyright © 2021 Bruno Capuano.

This project is MIT licensed.

