Model Zoo for AI Model Efficiency Toolkit

Overview

We provide a collection of popular neural network models and compare their floating-point and quantized performance. The results demonstrate that quantized models can provide accuracy comparable to floating-point models. Along with the results, we also provide recipes for users to quantize floating-point models using the AI Model Efficiency ToolKit (AIMET).

Table of Contents

  • Introduction
  • TensorFlow Models
  • PyTorch Models
  • Examples
  • Team
  • License

Introduction

Quantized inference is significantly faster than floating-point inference, and enables models to run in a power-efficient manner on mobile and edge devices. We use AIMET, a library that includes state-of-the-art techniques for quantization, to quantize various models available in TensorFlow and PyTorch frameworks. The list of models is provided in the sections below.

An original FP32 source model is quantized using either the post-training quantization (PTQ) or the Quantization-Aware Training (QAT) techniques available in AIMET. Example evaluation scripts are provided for each model. When PTQ is needed, the evaluation script performs PTQ before evaluation. Wherever QAT is used, the fine-tuned model checkpoint is also provided.
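
For reference, the following is a minimal sketch of the PTQ flow using AIMET's PyTorch API (aimet_torch). The model, the hypothetical pass_calibration_data callback, and the bit-widths are placeholder assumptions, and exact argument names can vary between AIMET releases.

```python
import torch
from torchvision import models

from aimet_common.defs import QuantScheme
from aimet_torch.quantsim import QuantizationSimModel

# Placeholder FP32 model; substitute the network you want to evaluate.
model = models.resnet50(pretrained=True).eval()


def pass_calibration_data(sim_model, _):
    # Hypothetical calibration callback: run a small set of unlabeled samples
    # through the model so AIMET can compute quantization encodings (ranges)
    # for weights and activations. A real script would use its data loader here.
    sim_model.eval()
    with torch.no_grad():
        for _ in range(16):
            sim_model(torch.rand(1, 3, 224, 224))


# Simulate quantized inference with 8-bit weights and 8-bit activations.
sim = QuantizationSimModel(model,
                           dummy_input=torch.rand(1, 3, 224, 224),
                           quant_scheme=QuantScheme.post_training_tf_enhanced,
                           default_param_bw=8,
                           default_output_bw=8)
sim.compute_encodings(forward_pass_callback=pass_calibration_data,
                      forward_pass_callback_args=None)

# Evaluate sim.model with the usual evaluation pipeline to obtain INT8 accuracy;
# for QAT, fine-tune sim.model with the regular training loop before evaluating.
```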

TensorFlow Models

Model Zoo

Network | Model Source [1] | Floating Pt (FP32) Model [2] | Quantized Model [3] | Results [4] | Documentation
--- | --- | --- | --- | --- | ---
ResNet-50 (v1) | GitHub Repo | Pretrained Model | See Documentation | (ImageNet) Top-1 Accuracy, FP32: 75.21%, INT8: 74.96% | ResNet50.md
MobileNet-v2-1.4 | GitHub Repo | Pretrained Model | Quantized Model | (ImageNet) Top-1 Accuracy, FP32: 75%, INT8: 74.21% | MobileNetV2.md
EfficientNet Lite | GitHub Repo | Pretrained Model | Quantized Model | (ImageNet) Top-1 Accuracy, FP32: 74.93%, INT8: 74.99% | EfficientNetLite.md
SSD MobileNet-v2 | GitHub Repo | Pretrained Model | See Example | (COCO) Mean Avg. Precision (mAP), FP32: 0.2469, INT8: 0.2456 | SSDMobileNetV2.md
RetinaNet | GitHub Repo | Pretrained Model | See Example | (COCO) mAP, FP32: 0.35, INT8: 0.349 (see Detailed Results) | RetinaNet.md
Pose Estimation | Based on Ref. | Based on Ref. | Quantized Model | (COCO) mAP, FP32: 0.383, INT8: 0.379; Mean Avg. Recall (mAR), FP32: 0.452, INT8: 0.446 | PoseEstimation.md
SRGAN | GitHub Repo | Pretrained Model | See Example | (BSD100) PSNR/SSIM, FP32: 25.45/0.668, INT8: 24.78/0.628, INT8W/INT16Act.: 25.41/0.666 (see Detailed Results) | SRGAN.md

[1] Original FP32 model source
[2] FP32 model checkpoint
[3] Quantized Model: For models quantized with a post-training technique, this refers to the FP32 model, which can then be quantized using AIMET. For models optimized with QAT, it refers to the model checkpoint with fine-tuned weights. 8-bit weights and activations are typically used. For some models, 8-bit weights and 16-bit activations (INT8W/INT16Act.) are used to further improve the performance of post-training quantization (see the configuration sketch after these notes).
[4] Results comparing float and quantized performance
[5] Script for quantized evaluation using the model referenced in the “Quantized Model” column
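
As an illustration of the INT8W/INT16Act. setting mentioned in note [3], weight and activation bit-widths are chosen independently when constructing the quantization simulation. This is a hedged sketch against the aimet_torch API with a placeholder model and input shape, not the exact recipe used for any particular zoo model.

```python
import torch
from torchvision import models

from aimet_torch.quantsim import QuantizationSimModel

# Placeholder FP32 model; in the zoo this setting is used for models such as SRGAN.
model = models.resnet18(pretrained=True).eval()

# 8-bit weights (default_param_bw) with 16-bit activations (default_output_bw).
sim = QuantizationSimModel(model,
                           dummy_input=torch.rand(1, 3, 224, 224),
                           default_param_bw=8,
                           default_output_bw=16)

# compute_encodings() and evaluation then proceed as in the PTQ sketch above.
```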

Detailed Results

RetinaNet

(COCO dataset)

Metric | IoU | Area | maxDets | FP32 | INT8
--- | --- | --- | --- | --- | ---
Average Precision | 0.50:0.95 | all | 100 | 0.350 | 0.349
Average Precision | 0.50 | all | 100 | 0.537 | 0.536
Average Precision | 0.75 | all | 100 | 0.374 | 0.372
Average Precision | 0.50:0.95 | small | 100 | 0.191 | 0.187
Average Precision | 0.50:0.95 | medium | 100 | 0.383 | 0.381
Average Precision | 0.50:0.95 | large | 100 | 0.472 | 0.472
Average Recall | 0.50:0.95 | all | 1 | 0.306 | 0.305
Average Recall | 0.50:0.95 | all | 10 | 0.491 | 0.490
Average Recall | 0.50:0.95 | all | 100 | 0.533 | 0.532
Average Recall | 0.50:0.95 | small | 100 | 0.345 | 0.341
Average Recall | 0.50:0.95 | medium | 100 | 0.577 | 0.577
Average Recall | 0.50:0.95 | large | 100 | 0.681 | 0.679

SRGAN

Model | Dataset | PSNR | SSIM
--- | --- | --- | ---
FP32 | Set5/Set14/BSD100 | 29.17/26.17/25.45 | 0.853/0.719/0.668
INT8/ACT8 | Set5/Set14/BSD100 | 28.31/25.55/24.78 | 0.821/0.684/0.628
INT8/ACT16 | Set5/Set14/BSD100 | 29.12/26.15/25.41 | 0.851/0.719/0.666

PyTorch Models

Model Zoo

Network | Model Source [1] | Floating Pt (FP32) Model [2] | Quantized Model [3] | Results [4] | Documentation
--- | --- | --- | --- | --- | ---
MobileNetV2 | GitHub Repo | Pretrained Model | Quantized Model | (ImageNet) Top-1 Accuracy, FP32: 71.67%, INT8: 71.14% | MobileNetV2.md
EfficientNet-lite0 | GitHub Repo | Pretrained Model | Quantized Model | (ImageNet) Top-1 Accuracy, FP32: 75.42%, INT8: 74.44% | EfficientNet-lite0.md
DeepLabV3+ | GitHub Repo | Pretrained Model | Quantized Model | (PascalVOC) mIOU, FP32: 72.62%, INT8: 72.22% | DeepLabV3.md
MobileNetV2-SSD-Lite | GitHub Repo | Pretrained Model | Quantized Model | (PascalVOC) mAP, FP32: 68.7%, INT8: 68.6% | MobileNetV2-SSD-lite.md
Pose Estimation | Based on Ref. | Based on Ref. | Quantized Model | (COCO) mAP, FP32: 0.364, INT8: 0.359; mAR, FP32: 0.436, INT8: 0.432 | PoseEstimation.md
SRGAN | GitHub Repo | Pretrained Model (older version from here) | See Example | (BSD100) PSNR/SSIM, FP32: 25.51/0.653, INT8: 25.5/0.648 (see Detailed Results) | SRGAN.md
DeepSpeech2 | GitHub Repo | Pretrained Model | See Example | (Librispeech Test Clean) WER, FP32: 9.92%, INT8: 10.22% | DeepSpeech2.md

[1] Original FP32 model source
[2] FP32 model checkpoint
[3] Quantized Model: For models quantized with a post-training technique, this refers to the FP32 model, which can then be quantized using AIMET. For models optimized with QAT, it refers to the model checkpoint with fine-tuned weights. 8-bit weights and activations are typically used. For some models, 8-bit weights and 16-bit activations are used to further improve the performance of post-training quantization.
[4] Results comparing float and quantized performance
[5] Script for quantized evaluation using the model referenced in the “Quantized Model” column

Detailed Results

SRGAN (PyTorch)

Model | Dataset | PSNR | SSIM
--- | --- | --- | ---
FP32 | Set5/Set14/BSD100 | 29.93/26.58/25.51 | 0.851/0.709/0.653
INT8 | Set5/Set14/BSD100 | 29.86/26.59/25.55 | 0.845/0.705/0.648

Examples

Install AIMET

Before you can run the example script for a specific model, you need to install the AI Model Efficiency ToolKit (AIMET) software. Please see this Getting Started page for an overview. Then install AIMET and its dependencies using these Installation instructions.

NOTE: To obtain the exact version of AIMET software that was used to test this model zoo, please install release 1.13.0 when following the above instructions.

Running the scripts

Download the datasets and code required to run the example for the model of interest. The examples run quantized evaluation and, if necessary, apply AIMET techniques to improve quantized model performance; they generate the final accuracy results noted in the tables above. Refer to the Docs folder for TensorFlow or PyTorch to access the documentation and procedures for a specific model.
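
Where PTQ alone is not enough, the evaluation scripts may apply additional AIMET techniques before measuring accuracy. The sketch below shows one such technique, cross-layer equalization, via aimet_torch's equalize_model helper; the model and input shape are placeholders, not the exact code used by the zoo scripts.

```python
from torchvision import models

from aimet_torch.cross_layer_equalization import equalize_model

# Placeholder FP32 model; the zoo scripts load the checkpoints listed in the tables above.
model = models.mobilenet_v2(pretrained=True).eval()

# Cross-layer equalization folds batch norms and rescales weights across
# consecutive layers, which typically reduces the accuracy drop from
# per-tensor INT8 quantization. The model is modified in place.
equalize_model(model, input_shapes=(1, 3, 224, 224))

# The equalized model can then be wrapped in QuantizationSimModel as shown earlier.
```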

Team

AIMET Model Zoo is a project maintained by Qualcomm Innovation Center, Inc.

License

Please see the LICENSE file for details.

Comments
  • Added PyTorch FFNet model, added INT4 to several models

    Added the following new model: PyTorch FFNet. Added INT4 quantization support to the following models:

    • PyTorch Classification (regnet_x_3_2gf, resnet18, resnet50)
    • PyTorch HRNet Posenet
    • PyTorch HRNet
    • PyTorch EfficientNet Lite0
    • PyTorch DeeplabV3-MobileNetV2

    Signed-off-by: Bharath Ramaswamy [email protected]

    opened by quic-bharathr
  • Added TensorFlow MobileDet-EdgeTPU and PyTorch InverseForm models

    Added two new models: TensorFlow MobileDet-EdgeTPU and PyTorch InverseForm. Fixed the TF version for 2 models in the README file. Minor updates to the TensorFlow EfficientNet Lite-0 doc and the PyTorch ssd_mobilenetv2 script.

    Signed-off-by: Bharath Ramaswamy [email protected]

    opened by quic-bharathr
  • Updated pose estimation evaluation code and documentation for updated…

    … model .pth file with weights state dict. Fixed a model loading problem by including the model definition in pose_estimation_quanteval.py. Added Quantizer Op Assumptions to the Pose Estimation document.

    Signed-off-by: Bharath Ramaswamy [email protected]

    opened by quic-bharathr
  • Error when running the pose estimation example

    $ python3.6 pose_estimation_quanteval.py pe_weights.pth ./data/

    2022-05-24 22:37:22,500 - root - INFO - AIMET defining network with shared weights
    Traceback (most recent call last):
      File "pose_estimation_quanteval.py", line 700, in <module>
        pose_estimation_quanteval(args)
      File "pose_estimation_quanteval.py", line 687, in pose_estimation_quanteval
        sim = quantsim.QuantizationSimModel(model, dummy_input=(1, 3, 128, 128), quant_scheme=args.quant_scheme)
      File "/home/jlchen/.local/lib/python3.6/site-packages/aimet_torch/quantsim.py", line 157, in __init__
        self.connected_graph = ConnectedGraph(self.model, dummy_input)
      File "/home/jlchen/.local/lib/python3.6/site-packages/aimet_torch/meta/connectedgraph.py", line 132, in __init__
        self._construct_graph(model, model_input)
      File "/home/jlchen/.local/lib/python3.6/site-packages/aimet_torch/meta/connectedgraph.py", line 254, in _construct_graph
        module_tensor_shapes_map = ConnectedGraph._generate_module_tensor_shapes_lookup_table(model, model_input)
      File "/home/jlchen/.local/lib/python3.6/site-packages/aimet_torch/meta/connectedgraph.py", line 244, in _generate_module_tensor_shapes_lookup_table
        run_hook_for_layers_with_given_input(model, model_input, forward_hook, leaf_node_only=False)
      File "/home/jlchen/.local/lib/python3.6/site-packages/aimet_torch/utils.py", line 277, in run_hook_for_layers_with_given_input
        _ = model(*input_tensor)
      File "/home/jlchen/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1071, in _call_impl
        result = forward_call(*input, **kwargs)
    TypeError: forward() takes 2 positional arguments but 5 were given

    opened by sundyCoder
  • I tried to quantize the DeepSpeech demo, but an error happened

    ImportError: /home/mi/anaconda3/envs/aimet/lib/python3.7/site-packages/aimet_common/x86_64-linux-gnu/aimet_tensor_quantizer-0.0.0-py3.7-linux-x86_64.egg/AimetTensorQuantizer.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK2at6Tensor8data_ptrIfEEPT_v

    Platform: Ubuntu 18.04, GPU: NVIDIA 2070, CUDA: 11.1, PyTorch, Python: 3.7

    opened by fmbao
  • Request for the MobileNet-V1-1.0 quantized (INT8) model

    Thank you for sharing these valuable models. I'd like to evaluate and look into the MobileNet-v1-1.0 model quantized by DFQ. I'd appreciate it if you could provide the quantized MobileNet-v1-1.0 model, either in TF or in PyTorch.

    opened by yschoi-dev
  • What's the runtime and AI framework for DeepSpeech2?

    For DeepSpeech2, may I know what the runtime for its quantized (INT8) model is: Hexagon DSP, NPU, or others? And what's the AI framework: SNPE, Hexagon NN, or others? Thanks~

    opened by sunfangxun
  • Unable to replicate DeepLabV3 PyTorch tutorial numbers

    I've been working through the DeepLabV3 PyTorch tutorial, which can be found here: https://github.com/quic/aimet-model-zoo/blob/develop/zoo_torch/Docs/DeepLabV3.md.

    However, when running the evaluation script using the optimized checkpoint, I am unable to replicate the mIOU result listed in the table. The number I got was 0.67, while the number reported by Qualcomm was 0.72. I was wondering if anyone has had this issue before and how to resolve it?

    opened by LLNLanLeN