The Hailo Model Zoo includes pre-trained models and a full building and evaluation environment

Overview

Hailo Model Zoo

The Hailo Model Zoo provides pre-trained models for high-performance deep learning applications. Using the Hailo Model Zoo you can measure the full-precision accuracy of each model, the quantized accuracy using the Hailo Emulator, and the accuracy on the Hailo-8 device. Finally, you can generate the Hailo Executable Format (HEF) binary file to speed up development and build high-quality applications accelerated by Hailo-8. The models are optimized for high accuracy on public datasets and can be used to benchmark the Hailo quantization scheme.

Usage

Quick Start Guide

  • Install the Hailo Dataflow Compiler and enter its virtualenv. If you are not a Hailo customer, please contact hailo.ai
  • Clone the Hailo Model Zoo
git clone https://github.com/hailo-ai/hailo_model_zoo.git
  • Run the setup script
cd hailo_model_zoo; pip install -e .
  • Run the Hailo Model Zoo. For example, to parse the YOLOv3 model:
python hailo_model_zoo/main.py parse yolov3

Getting Started

For further functionality, please see the GETTING_STARTED page (full installation instructions and usage examples). The Hailo Model Zoo uses the Hailo Dataflow Compiler for parsing, quantization, emulation, and compilation of deep learning models. Full functionality includes the stages below (a short usage sketch follows the list):

  • Parse: translate the input model into Hailo's internal representation.
  • Profiler: generate a profiler report for the model. The report contains information about your model and its expected performance on the Hailo hardware.
  • Quantize: numerically translate the input model into a compressed integer representation.
  • Compile: run the Hailo compiler to generate the Hailo Executable Format (HEF) file, which can be executed on the Hailo hardware.
  • Evaluate: infer the model using the Hailo Emulator or the Hailo hardware and produce the model's accuracy.
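
As a rough illustration of this flow, the sketch below chains the stages from Python via the hailomz command-line entry point (the same tool used in the comments further down this page). Only the parse and compile subcommands are confirmed by this page; the profile, optimize, and eval names are assumptions and may differ between Dataflow Compiler releases, so check hailomz --help in your own environment.

import subprocess

MODEL = "yolov3"  # any network name from the pre-trained model list

def hailomz(*args):
    """Run one hailomz subcommand inside the Dataflow Compiler virtualenv."""
    cmd = ["hailomz", *args]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raise if the stage fails

# Parse -> profile -> quantize/optimize -> compile -> evaluate.
# Subcommand names other than "parse" and "compile" are assumptions.
hailomz("parse", MODEL)     # translate the model into Hailo's internal representation
hailomz("profile", MODEL)   # assumed name: generate the profiler report
hailomz("optimize", MODEL)  # assumed name: quantize to a compressed integer representation
hailomz("compile", MODEL)   # generate the HEF file for the Hailo-8
hailomz("eval", MODEL)      # assumed name: measure accuracy on the emulator or device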

For further information about the Hailo Dataflow Compiler please contact hailo.ai.

Models

The full list of pre-trained models can be found here.

License

The Hailo Model Zoo is released under the MIT license. Please see the LICENSE file for more information.

Contact

Please visit hailo.ai for support / requests / issues.

Comments
  • yolov7.hef vs yolov5m_wo_spp_60p.hef

    Hi,

    As far as I know, yolov7 is faster and more accurate than yolov5.

    In our tests:

    gst-launch-1.0 rtspsrc location=rtsp://xxxxx/ISAPI/Streaming/Channels/101 name=src_0 ! decodebin ! videoscale ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailonet hef-path=/local/shared_with_docker/yolov7.hef is-active=true batch-size=1 ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailofilter function-name=yolov5 so-path=/local/workspace/tappas/apps/gstreamer/libs/post_processes//libyolo_post.so config-path=/local/workspace/tappas/apps/gstreamer/general/detection/resources/configs/yolov5.json qos=false ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailooverlay ! videoconvert ! fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false -v | grep -e hailo_display -e hailodevicestats

    yolov7.hef is almost 7 times slower than the yolov5m_wo_spp_60p.hef version.

    opened by MyraBaba 19
  • Error: Model uses too many reources: 136 Layer-Controllers

    onnx: 1.11.0
    torch: 1.12.1
    torchvision: 0.13.1
    

    Hi, I have fine-tuned the yolov5m_wo_spp.pt model in the yolov5 v6.2 framework. Then I exported the model to ONNX (with opset 11), also in the yolov5 v6.2 framework. When I compile this ONNX model with hailomz compile, the model optimization completes correctly, but then it throws the following error:

    997/1000 [============================>.] - ETA: 1s - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74 998/1000 [============================>.] - ETA: 0s - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74 999/1000 [============================>.] - ETA: 0s - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv741000/1000 [==============================] - ETA: 0s - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv741000/1000 [==============================] - 477s 477ms/step - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74: 0.0297
    [info] Fine Tune is done (completion time is 00:38:18.70)
    Calibration: 64entries [00:48,  1.32entries/s]
    
    [info] Model Optimization is done
    [info] Loading model script on yolov5m_wo_spp
    [info] Loading network parameters
    [info] Starting Hailo allocation and compilation flow
    [info] Using Single-context flow
    [info] Resources optimization guidelines: Strategy -> GREEDY Objective -> REQUIRED_FPS
    [info] Resources optimization params: max_control_utilization=120%, max_compute_utilization=100%, max_memory_utilization (weights)=100%, max_input_aligner_utilization=100%, max_apu_utilization=100%
    [info] Running Auto-Merger
    [info] Auto-Merger is done
    [info] Adding a portal between conv27( index=19 604, name=conv27, ) and concat7, type: L4
    [info] Starting context partition
    [info] Context partition is done (0s 2ms)
    [info] Adding format conversion layer 'auto_reshape_from_input_layer1_to_merged_layer_normalization1_space_to_depth1' after input_layer1
    [info] Adding format conversion layer 'auto_reshape_from_conv74_to_output_layer1' after conv74
    [info] Adding format conversion layer 'auto_reshape_from_conv84_to_output_layer2' after conv84
    [info] Adding format conversion layer 'auto_reshape_from_conv93_to_output_layer3' after conv93
    Model uses too many reources: 136 Layer-Controllers
    [critical] Model uses too many reources: 136 Layer-Controllers
    [error] Failed to produce compiled graph
    [error] Tried to deserialize allocator result on failure, but got another exception: No output graph, deserialization failed.
    Traceback (most recent call last):
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/bin/hailomz", line 33, in <module>
        sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main.py", line 181, in main
        run(args)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main.py", line 170, in run
        return handlers[args.command](args)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main_driver.py", line 132, in compile
        compile_model(runner, network_info, args.results_dir, model_script_path=args.model_script_path)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/core/main_utils.py", line 298, in compile_model
        hef = runner.compile()
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
        return func(self, *args, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/runner/client_runner.py", line 661, in compile
        return self._get_hef_hw_representation()
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
        return func(self, *args, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/runner/client_runner.py", line 707, in _get_hef_hw_representation
        serialized_hef = self._sdk_backend.get_hef_hw_representation(fps, allocator_script, mapping_timeout)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1156, in get_hef_hw_representation
        hef, mapped_graph_file = self._get_hef_hw_representation(fps, allocator_script, mapping_timeout)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1151, in _get_hef_hw_representation
        hef, mapped_graph_file, auto_alls = self.hef_full_build(fps, mapping_timeout, model_params, allocator_script)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1128, in hef_full_build
        auto_alls, self._mapped_graph, self._integrated_graph = allocator.create_mapping_and_full_build_hef(
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 568, in create_mapping_and_full_build_hef
        self.call_builder(network_graph_path, output_path, compilation_output_proto=compilation_output_proto,
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 527, in call_builder
        self.run_builder(network_graph_path, output_path, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 394, in run_builder
        raise e.internal_exception("Hailo tools builder failed:", hailo_tools_error=e.hailo_tools_error) from None
    hailo_sdk_client.sdk_backend.sdk_backend_exceptions.BackendAllocatorException: Hailo tools builder failed: Model uses too many reources: 136 Layer-Controllers
    

    If I train the model in the hailo yolov5 retraining docker container, the compilation works fine. Any idea what this error means?

    opened by frenky-strasak 3
  • Is there a specific implementation limit ? (multitasking models, cascaded models, or large models)

    Hi, I have not applied for a Developer Zone account yet (will it be difficult to get approved?). I wonder whether the Hailo-8 chip can run several large models at the same time. Or can you tell me what the implementation limits are? For example:

    1. Operator compatibility (highest opset version supported by ONNX, or better than OpenVINO in general?)
    2. What is the memory size of the chip that can store/compute tensors? Can it run super-resolution models with higher output resolutions? Can the chip run very wide fully connected layers?
    3. Hailo-8 can multitask; if I run keypoint detection, ReID, and depth estimation at the same time with frame skipping, will the chip's compute or memory capacity be overloaded? How can I spot where the overload is, or estimate it?
    4. Has your team considered "chaining" multiple Hailo-8 chips together to run some difficult tasks? That would be super cool.
    opened by BICHENG 2
  • Dataflow Compiler v3.17.0 not available in Developer Zone

    Hi, in the latest changelog update you mentioned that the repository was updated to use the Dataflow Compiler v3.17.0. However, in the Developer Zone only version 3.16.0 is available. How can we get the latest Dataflow Compiler v3.17.0? Could you please add it to the Developer Zone?

    opened by kmaerkl 2
  • Old version of yolov5 in retraining docker container.

    Hi, why does the retraining docker container contain the old version of yolov5, v2.0? Is it possible to use a newer version of yolov5, such as v6.2? Are these newer versions of yolov5 compatible with the Dataflow Compiler for optimizing and compiling models to Hailo hef files? Thanks!

    opened by frenky-strasak 1
  • YoloV7-tiny with on-chip NMS

    Dear Hailo, the output structure of Yolov5 and Yolov7 is the same IIRC, so it should be possible to run the NMS on-chip. I wanted to test this, so I took the yolov5xs_wo_spp_nms model of this zoo as a reference. When downloading, I get this NMS config JSON:

    {
      "nms_scores_th": 0.01,
      "nms_iou_th": 1.0,
      "image_dims": [512, 512],
      "max_proposals_per_class": 80,
      "background_removal": false,
      "input_division_factor": 8,
      "classes": 80,
      "bbox_decoders": [
          {
              "name": "bbox_decoder53",
              "w": [
                  10,
                  16,
                  33
              ],
              "h": [
                  13,
                  30,
                  23
              ],
              "stride": 8,
              "encoded_layer": "conv53"
          },
          {
              "name": "bbox_decoder61",
              "w": [
                  30,
                  62,
                  59
              ],
              "h": [
                  61,
                  45,
                  119
              ],
              "stride": 16,
              "encoded_layer": "conv61"
          },
          {
              "name": "bbox_decoder69",
              "w": [
                  116,
                  156,
                  373 
              ],
              "h": [
                  90,
                  198,
                  326
              ],
              "stride": 32,
              "encoded_layer": "conv69"
          }
      ]
    }
    

    I cannot find a description of these parameters anywhere in the documentation. While I understand some parameters, like the names and anchors (w, h, stride, etc.), I don't get these ones:

      "background_removal": false,
      "input_division_factor": 8,
    

    Can you help me with these parameters? And did you ever test yolov7 with on-chip decode/NMS? Further, in the yolov5xs alls file, these settings have been made:

    buffers(proposal_generator0, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator1, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator2, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator0_concat, nms1, 2)
    

    which I don't really understand. Could you explain the usage of those as well? Thanks!

    Cheers

    opened by dnns92 1
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks that all tarfile members will be extracted safely and throws an exception otherwise (see the sketch after this comments list for the general shape of such a check). We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • Rectangle hef model does not work

    Hi, in the yolov5 retraining container (v.2) I have exported yolov5m_wo_spp.pt to ONNX with rectangular shape: python models/export.py --weights model.pt --img 352 640 --batch 1 (--img H W)

    Then I compiled this onnx model: hailomz compile --ckpt model.onnx --calib-path calib_dataset --yaml yolov5m_wo_spp.yaml where I changed the yolov5m_wo_spp.yaml like this:

    • I added the preprocessing part with corresponding shape:
    preprocessing:
      network_type: detection
      input_shape:
      - 352
      - 640
      - 3
    
    • and I changed the info (the output shapes were found by Netron tool)
    info:
      input_shape: 352x640x3
      output_shape: 11x20x18, 22x40x18, 44x80x18
    

    The complete file is here: yolov5m_wo_spp.zip

    The compilation looks fine. When I deploy the hef file in my pipeline, I see wrong bboxes, which are doubled and shifted, but they move in the same way as the object to be detected.

    What am I missing? Should I also modify the yolov5m_wo_spp.alls file? Could you point me in the right direction, please? Thank you!

    opened by frenky-strasak 3
  • Fix yolo postprocessing when batches are used

    Hi,

    Currently the yolo postprocessing does not work when batches are provided. I fixed the bug in this branch: https://github.com/DavidBecht/hailo_model_zoo

    BR

    opened by DavidBecht 0
  • Illegal instruction (core dumped)

    Hi,

    hailomz gives the error below every time (in Docker).

    All other commands and TAPPAS work without any problem.

    hailomz -h
    Illegal instruction (core dumped)

    opened by MyraBaba 1
  • When I run the HEF, something goes wrong. How do I compile a model to HEF on an ARM architecture?

    [HailoRT] [error] CHECK failed - Failed opening non-compatible HEF with the following unsupported extensions: KO Run ASAP (KO_RUN_ASAP)
    [HailoRT] [error] CHECK_SUCCESS failed with status=26
    [HailoRT] [error] Failed parsing HEF file
    [HailoRT] [error] Failed creating HEF
    [HailoRT] [error] CHECK_EXPECTED failed with status=26

    opened by riverfrank 2
  • Post-processing yolov5s personface output

    Hi,

    As a result of inference with the yolov5s_personface model I get 3 tensors of dimensions [1, 40, 40, 21], [1, 20, 20, 21], and [1, 80, 80, 21]; what is the correct/fastest procedure to decode them in order to get a list of detections (such as [x_min, y_min, x_max, y_max, score, class])? (See the decoding sketch after this comments list.)

    Thanks

    opened by aux82716 6
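
Regarding the last question above (yolov5s_personface post-processing), the following sketch shows the standard YOLOv5 head decoding in NumPy: sigmoid, grid offsets, anchor scaling, and a score threshold, producing [x_min, y_min, x_max, y_max, score, class] rows that still need NMS. It is a generic illustration rather than the Model Zoo's own post-processing code, and it assumes a 640x640 input, 2 classes (21 = 3 x (5 + 2) output channels per scale), NHWC outputs, and the standard YOLOv5 anchors listed in the NMS config JSON earlier on this page.

import numpy as np

# Standard YOLOv5 anchors per stride (same values as the bbox_decoders in the
# NMS config JSON above). These are assumptions and must match the trained model.
ANCHORS = {
    8:  [(10, 13), (16, 30), (33, 23)],
    16: [(30, 61), (62, 45), (59, 119)],
    32: [(116, 90), (156, 198), (373, 326)],
}
NUM_CLASSES = 2  # assumed: person + face

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_level(feat, stride, score_th=0.25):
    """Decode one [1, H, W, 3*(5+C)] output map into an (N, 6) detection array."""
    _, h, w, _ = feat.shape
    p = _sigmoid(feat.reshape(h, w, 3, 5 + NUM_CLASSES))
    gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    grid = np.stack([gx, gy], axis=-1).astype(np.float32)
    dets = []
    for a, (aw, ah) in enumerate(ANCHORS[stride]):
        xy = (p[..., a, 0:2] * 2.0 - 0.5 + grid) * stride     # box centre in input pixels
        wh = (p[..., a, 2:4] * 2.0) ** 2 * np.array([aw, ah])  # box size in input pixels
        score = p[..., a, 4:5] * p[..., a, 5:]                 # objectness * class probability
        cls_id = score.argmax(-1)
        best = score.max(-1)
        keep = best > score_th
        x1y1 = xy[keep] - wh[keep] / 2.0
        x2y2 = xy[keep] + wh[keep] / 2.0
        dets.append(np.concatenate(
            [x1y1, x2y2, best[keep][:, None], cls_id[keep][:, None]], axis=-1))
    return np.concatenate(dets, axis=0)

# Usage with the three tensors from the question (shapes [1,80,80,21], [1,40,40,21],
# [1,20,20,21] for strides 8, 16 and 32); apply NMS to the concatenated result.
# detections = np.concatenate([decode_level(out80, 8),
#                              decode_level(out40, 16),
#                              decode_level(out20, 32)])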
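
On the CVE-2007-4559 comment above: the snippet below is a minimal sketch of the kind of member-path check that patch describes, rejecting any tar member whose resolved path would land outside the destination directory. It is a generic illustration of the mitigation, not the actual pull request, and the helper name safe_extractall is hypothetical.

import os
import tarfile

def safe_extractall(tar: tarfile.TarFile, dest: str) -> None:
    """Extract only if every member stays inside dest (blocks '..' path traversal)."""
    dest = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(dest, member.name))
        if target != dest and not target.startswith(dest + os.sep):
            raise RuntimeError(f"Blocked path traversal attempt: {member.name}")
    tar.extractall(dest)

# Usage:
# with tarfile.open("pretrained_model.tar.gz") as tar:
#     safe_extractall(tar, "./models")
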
Releases: v2.5