Potato Disease Classification - Training, REST APIs, and a Frontend for testing.

Overview

This repository contains the code for training the potato disease classification model, serving it through REST APIs (FastAPI and TF Serving), and testing it through a ReactJS web frontend and a React Native mobile app.

Setup for Python:

  1. Install Python (Setup instructions)

  2. Install Python packages

pip3 install -r training/requirements.txt
pip3 install -r api/requirements.txt
  3. Install TensorFlow Serving (Setup instructions)
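
If you plan to run TF Serving through Docker, as in the API section further below, pulling the serving image is typically all that is needed:

docker pull tensorflow/serving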

Setup for ReactJS

  1. Install Node.js (Setup instructions)
  2. Install NPM (Setup instructions)
  3. Install dependencies
cd frontend
npm install --from-lock-json
npm audit fix
  4. Copy .env.example as .env.

  5. Change the API URL in .env (see the example below).
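
As an illustration only (the actual key comes from frontend/.env.example, and the value depends on where and on which route the API is served), a local development .env might look like:

REACT_APP_API_URL=http://localhost:8000/predict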

Setup for React-Native app

  1. Go to the React Native environment setup guide, then select the React Native CLI Quickstart tab.

  2. Install dependencies

cd mobile-app
yarn install
  2.1 Only for macOS users:
cd ios && pod install && cd ../
  3. Copy .env.example as .env.

  4. Change the API URL in .env.

Training the Model

  1. Download the data from Kaggle.
  2. Only keep the folders related to potatoes.
  3. Run Jupyter Notebook in the browser:
jupyter notebook
  4. Open training/potato-disease-training.ipynb in Jupyter Notebook.
  5. In cell #2, update the path to the dataset.
  6. Run all the cells one by one.
  7. Copy the generated model and save it with a version number in the models folder (a sketch of this step follows below).
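
A minimal sketch of that last step, assuming the notebook ends with a trained Keras model in a variable named model and that exported models live in a models folder next to training (the names and paths are assumptions to adjust):

import os

def save_versioned(model, models_dir="../models"):
    # Save the trained model under the next integer version (models/1, models/2, ...),
    # which is the layout the TF Serving setup in the API section expects.
    os.makedirs(models_dir, exist_ok=True)
    existing = [int(d) for d in os.listdir(models_dir) if d.isdigit()]
    version = max(existing, default=0) + 1
    export_path = os.path.join(models_dir, str(version))
    model.save(export_path)  # with TF 2.x Keras this writes the SavedModel format
    return export_path

# In the final notebook cell, with `model` being the trained model:
# save_versioned(model)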

Running the API

Using FastAPI

  1. Get inside the api folder:
cd api
  2. Run the FastAPI server using uvicorn:
uvicorn main:app --reload --host 0.0.0.0
  3. Your API is now running at 0.0.0.0:8000.
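
For reference, here is a minimal sketch of what a main.py along these lines typically contains; the real api/main.py may differ in endpoint names, class labels, image size, and preprocessing, so treat this as an assumption-laden outline rather than the project's exact code:

from io import BytesIO

import numpy as np
import tensorflow as tf
import uvicorn
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()

# Assumed model path, labels, and input size for illustration only.
MODEL = tf.keras.models.load_model("../models/1")
CLASS_NAMES = ["Early Blight", "Late Blight", "Healthy"]
IMAGE_SIZE = 256

@app.get("/ping")
async def ping():
    return "Hello, I am alive"

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    # Read the uploaded image, resize to the model's expected input, and batch it.
    image = Image.open(BytesIO(await file.read())).convert("RGB").resize((IMAGE_SIZE, IMAGE_SIZE))
    batch = np.expand_dims(np.array(image), 0)
    predictions = MODEL.predict(batch)
    return {
        "class": CLASS_NAMES[int(np.argmax(predictions[0]))],
        "confidence": float(np.max(predictions[0])),
    }

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)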

Using FastAPI & TF Serve

  1. Get inside the api folder:
cd api
  2. Copy models.config.example as models.config and update the paths in the file (an example models.config is shown after this list).
  3. Run TF Serving (update the config file path in the command below):
docker run -t --rm -p 8501:8501 -v C:/Code/potato-disease-classification:/potato-disease-classification tensorflow/serving --rest_api_port=8501 --model_config_file=/potato-disease-classification/models.config
  4. Run the FastAPI server using uvicorn. You can run it directly from main.py or main-tf-serving.py using PyCharm's run option (as shown in the video tutorial), or run it from the command prompt as shown below:
uvicorn main-tf-serving:app --reload --host 0.0.0.0
  5. Your API is now running at 0.0.0.0:8000.
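
A minimal models.config, matching the volume mount used in the docker command above (the model name here is an assumption; it must match whatever main-tf-serving.py calls):

model_config_list {
  config {
    name: "potatoes_model"
    base_path: "/potato-disease-classification/models"
    model_platform: "tensorflow"
  }
}

The main-tf-serving.py variant typically forwards the image batch to the TF Serving REST endpoint instead of loading the model in-process; a hedged sketch of that call:

import numpy as np
import requests

# Assumed model name; it must match the "name" field in models.config.
TF_SERVING_URL = "http://localhost:8501/v1/models/potatoes_model:predict"

def predict_via_tf_serving(batch: np.ndarray) -> np.ndarray:
    # Send a batch of images to TF Serving's REST API and return the raw predictions.
    response = requests.post(TF_SERVING_URL, json={"instances": batch.tolist()})
    response.raise_for_status()
    return np.array(response.json()["predictions"])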

Running the Frontend

  1. Get inside the frontend folder:
cd frontend
  2. Copy .env.example as .env and update REACT_APP_API_URL to the API URL if needed.
  3. Run the frontend:
npm run start

Running the app

  1. Get inside the mobile-app folder:
cd mobile-app
  2. Copy .env.example as .env and update URL to the API URL if needed.

  3. Run the app (Android/iOS):

npm run android

or

npm run ios
  4. Create a public release build (signed APK); see the note below.
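
Generating a signed APK requires configuring a release signing key first (see React Native's "Publishing to Google Play Store" guide); with signing set up, the release build is typically produced with:

cd android && ./gradlew assembleRelease

The APK is then written under android/app/build/outputs/apk/release/.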

Creating the TF Lite Model

  1. Run Jupyter Notebook in the browser:
jupyter notebook
  2. Open training/tf-lite-converter.ipynb in Jupyter Notebook.
  3. In cell #2, update the path to the dataset.
  4. Run all the cells one by one.
  5. The model will be saved in the tf-lite-models folder.
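
The core of that conversion, as a hedged sketch (the notebook's exact paths and options, e.g. quantization settings, may differ):

import tensorflow as tf

# Assumed locations for illustration; adjust to the notebook's actual paths.
SAVED_MODEL_DIR = "../models/1"
TFLITE_PATH = "../tf-lite-models/potatoes.tflite"

# Convert the SavedModel exported during training into a TF Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_model = converter.convert()

with open(TFLITE_PATH, "wb") as f:
    f.write(tflite_model)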

Deploying the TF Lite on GCP

  1. Create a GCP account.
  2. Create a Project on GCP (Keep note of the project id).
  3. Create a GCP bucket.
  4. Upload the potatoes.h5 model to the bucket at the path models/potatoes.h5.
  5. Install Google Cloud SDK (Setup instructions).
  6. Authenticate with Google Cloud SDK.
gcloud auth login
  7. Run the deployment script (replace project_id with your GCP project id):
cd gcp
gcloud functions deploy predict_lite --runtime python38 --trigger-http --memory 512 --project project_id
  8. Your model is now deployed.
  9. Use Postman to test the GCF using the trigger URL.
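
For reference, TF Lite inference inside a function like this generally follows the standard interpreter pattern; a minimal sketch where the model path, input size, and labels are assumptions (check gcp/main.py for what predict_lite actually loads and returns):

import numpy as np
import tensorflow as tf

CLASS_NAMES = ["Early Blight", "Late Blight", "Healthy"]  # assumed labels

# Load the flatbuffer once (e.g. after downloading it from the bucket to /tmp).
interpreter = tf.lite.Interpreter(model_path="/tmp/potatoes.tflite")
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

def classify(batch):
    # batch: float32 array shaped like the model's input, e.g. (1, 256, 256, 3).
    interpreter.set_tensor(input_index, batch.astype(np.float32))
    interpreter.invoke()
    predictions = interpreter.get_tensor(output_index)[0]
    return {
        "class": CLASS_NAMES[int(np.argmax(predictions))],
        "confidence": float(np.max(predictions)),
    }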

Inspiration: https://cloud.google.com/blog/products/ai-machine-learning/how-to-serve-deep-learning-models-using-tensorflow-2-0-with-cloud-functions

Deploying the TF Model (.h5) on GCP

  1. Create a GCP account.
  2. Create a Project on GCP (Keep note of the project id).
  3. Create a GCP bucket.
  4. Upload the generated .h5 model to the bucket at the path models/potato-model.h5.
  5. Install Google Cloud SDK (Setup instructions).
  6. Authenticate with Google Cloud SDK.
gcloud auth login
  7. Run the deployment script (replace project_id with your GCP project id):
cd gcp
gcloud functions deploy predict --runtime python38 --trigger-http --memory 512 --project project_id
  8. Your model is now deployed.
  9. Use Postman to test the GCF using the trigger URL (a sketch of the function handler follows below).
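
For orientation, here is a hedged sketch of what a predict handler along these lines generally does: download the model from the bucket on cold start, then classify the uploaded image. The bucket name, labels, and preprocessing are assumptions, not necessarily the repository's exact gcp/main.py:

import numpy as np
import tensorflow as tf
from google.cloud import storage
from PIL import Image

BUCKET_NAME = "your-gcp-bucket"  # assumed; use the bucket created above
CLASS_NAMES = ["Early Blight", "Late Blight", "Healthy"]  # assumed labels
model = None

def download_blob(bucket_name, source_blob_name, destination_file_name):
    # Copy the model file from the GCS bucket to the function's writable /tmp storage.
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    bucket.blob(source_blob_name).download_to_filename(destination_file_name)

def predict(request):
    # HTTP Cloud Function: expects an image in the multipart form field "file".
    global model
    if model is None:  # cold start: fetch and load the model once per instance
        download_blob(BUCKET_NAME, "models/potato-model.h5", "/tmp/potato-model.h5")
        model = tf.keras.models.load_model("/tmp/potato-model.h5")

    # Adjust resizing/scaling to match the preprocessing used during training.
    image = Image.open(request.files["file"]).convert("RGB").resize((256, 256))
    batch = np.expand_dims(np.array(image), 0)
    predictions = model.predict(batch)
    return {
        "class": CLASS_NAMES[int(np.argmax(predictions[0]))],
        "confidence": float(np.max(predictions[0])),
    }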

Inspiration: https://cloud.google.com/blog/products/ai-machine-learning/how-to-serve-deep-learning-models-using-tensorflow-2-0-with-cloud-functions

Owner: codebasics