[Notebook Series, Issue 7] Downloading and Using OpenVINO Pre-trained Models
2022-07-19 04:09:00 【Intel Edge Computing Community】
Before you know it, the Notebook series has been with you for six issues. Over those six issues, Nono and friends have studied semantic segmentation, classification and detection deployment, text detection, and other topics, and learned a great deal.
In the blink of an eye, teacher Ethan is back with the seventh lesson. It is packed with practical content, and Nono will keep digging in. Hurry up and learn along with Nono!

Open Model Zoo
About this course
The objective of this course: use OpenVINO's Open Model Zoo to download, convert, and evaluate an OpenVINO pre-trained model.
Developers will know that OpenVINO ships with a very important component called the Open Model Zoo. In the Open Model Zoo, Intel provides a rich set of pre-trained models, including Intel's own first-party models as well as publicly available third-party models. These pre-trained models cover different application scenarios in computer vision, natural language processing, and other fields. Using the model downloading and testing tools in Intel OpenVINO, we can download these models and carry out further validation and application deployment.

Next, let's walk through a notebook to see how to download and use a model from the Open Model Zoo.
Open Model Zoo
Model download and tool walkthrough
01 Model download and transformation
Open the notebook and you can see that this chapter is called Working with Open Model Zoo Models. In other words, we will use several tool components to work through the tools in the Open Model Zoo.

First is the tool named Model Downloader. As the name suggests, we can use the omz_downloader command to download OpenVINO pre-trained models. Because many pre-trained models come from third parties and are stored in third-party formats, we then need Model Converter to convert these third-party model formats into OpenVINO's supported IR (Intermediate Representation) format.
02 Model parameters and configuration
After downloading and converting a model, we can use the Info Dumper command to query the model's basic parameters and information. In addition, the Benchmark Tool comes with a rich set of preset hardware and software configuration parameters; by adjusting them you can find the configuration that best suits your application.
Open Model Zoo
Worked example
01 Model download

First we define a model name. Here we use mobilenet-v2 from PyTorch, a classification model. We then load the corresponding dependencies and specify the path where the model will be saved, the model cache directory, and the precision of the final model.

Because we are working in a Python environment, we use a Python script to simulate entering commands on the command line. Just enter a single omz_downloader command, specifying the model name, the local storage path, and the cache directory, and the whole model download completes easily.
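The step above can be sketched as follows. This is a minimal, illustrative version of how the notebook composes the omz_downloader command line in Python; the model name is from the article, but the directory names here are placeholder assumptions.

```python
from pathlib import Path

# All paths below are illustrative -- the notebook may use different ones.
model_name = "mobilenet-v2"      # a PyTorch classification model in Open Model Zoo
base_model_dir = Path("model")   # where downloaded models will be stored
omz_cache_dir = Path("cache")    # download cache, so reruns skip the network

# Compose the omz_downloader command exactly as it would be typed in a shell.
download_command = (
    f"omz_downloader --name {model_name} "
    f"--output_dir {base_model_dir} --cache_dir {omz_cache_dir}"
)
print(download_command)
# In a notebook, the string is then executed with:  ! {download_command}
```
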

When the download completes, we see some log information. The model is stored locally under a path called open_model_zoo_models. Because it comes from a third-party model library, it is placed in an additional subdirectory named public.
02 Model transformation
Next, we transform the model with Model Converter. It is very easy to use: specify the model name, the precision, its local path, and finally the output path for the IR-format model.
The model converter tool actually wraps the Model Optimizer tool, so its final output looks much like Model Optimizer's output. In the end, we successfully converted this model into IR version 11.
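A sketch of the conversion step, again composing the command line in Python. The omz_converter flags shown are standard ones; the directory names and precision are illustrative assumptions.

```python
# omz_converter wraps Model Optimizer: it converts the downloaded third-party
# model into OpenVINO IR (.xml + .bin).  Paths and precision are illustrative.
model_name = "mobilenet-v2"
precision = "FP16"

convert_command = (
    f"omz_converter --name {model_name} --precisions {precision} "
    f"--download_dir model --output_dir model"
)
print(convert_command)
```
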
03 Understand model parameters
Once the model conversion is done, we can look at the model's basic parameters. Through the omz_info_dumper command we can learn the model's name, its basic description, its original framework, its license, its precision, its directory (location), and its task type.
Next, let's run the omz_info_dumper command; you can see the basic information of the whole model dumped out.

First of all the framework: it comes from PyTorch. The output also includes related configuration information, such as the accuracy_config configuration file used for model accuracy validation; with that yml file and OpenVINO's accuracy-check tool, we can further verify the model's accuracy. Besides that, the output includes the model's current precision, the precisions it can be quantized to, its directory, and the model's basic input information, including the input layer name, its input_shape, its layout, and so on.
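For a feel of what comes back, the snippet below parses a trimmed-down, illustrative sample of omz_info_dumper's JSON output. The exact field set is an assumption here; run `omz_info_dumper --name mobilenet-v2` yourself to see the full record.

```python
import json

# omz_info_dumper prints a JSON list with one record per model.  The sample
# below is a reduced, illustrative version of such a record.
sample_output = """
[{
    "name": "mobilenet-v2",
    "framework": "pytorch",
    "precisions": ["FP16", "FP32"],
    "subdirectory": "public/mobilenet-v2",
    "task_type": "classification"
}]
"""
model_info = json.loads(sample_output)[0]
print(model_info["framework"], model_info["task_type"])
```
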
Next, we can use this information to further preprocess the input data so that it matches the model's input.
04 Benchmarking model performance
The benchmark tool lets us further understand the model's performance on the current hardware platform for the current application, so next we run benchmark_app.
Here we again simulate a command-line instruction: first the command name benchmark_app, then the input path of our model, then how long to run. This time we run for 15 seconds; within those 15 seconds, the tool iterates inference (inference requests) as many times as possible to obtain a relatively averaged measurement.
When benchmark_app finishes, we get some performance figures. Shown above is mainly part of the log information, for example how the streams run (a stream is a group of several CPU cores executing together), and some information about the model's inputs: random values are loaded as the model input. The first inference took 7.55 milliseconds, and some 3,000 iterations ran over the 15 seconds, which is exactly the duration we predefined.

Besides that, there is the latency: the average, median, minimum, and maximum of the latency figures, and finally the throughput of the whole model.
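The basic benchmark run described above can be sketched like this; `-m` and `-t` are standard benchmark_app flags, while the model path is an illustrative assumption (point it at the IR file omz_converter produced).

```python
# The model path is illustrative -- substitute your converted IR file.
model_path = "model/public/mobilenet-v2/FP16/mobilenet-v2.xml"
seconds = 15

# -m selects the model, -t the run duration in seconds.
benchmark_command = f"benchmark_app -m {model_path} -t {seconds}"
print(benchmark_command)
```
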
05 Configure other parameters
Besides the model path and the run time, there are many other parameters we can configure.

Let's take two examples. First, with the -d option we can specify which hardware device the model runs on. We can also use the -api parameter to choose asynchronous or synchronous mode: asynchronous mode achieves better throughput, while synchronous mode gives better first-inference latency.
We can also set the batch size. Because the GPU is a device with strong parallelism, when configuring for GPU it is better to set a larger batch size to obtain better throughput. Of course, there are many more configurable parameters; run benchmark_app --help to see everything that can be configured.
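A hedged sketch of the two examples, composed as command strings: `-api` and `-b` are standard benchmark_app flags, while the model path and the batch size of 8 are illustrative choices, not values from the article.

```python
model_path = "model/public/mobilenet-v2/FP16/mobilenet-v2.xml"  # illustrative

# -api async favours throughput; -api sync favours first-inference latency.
async_command = f"benchmark_app -m {model_path} -t 15 -d CPU -api async"
sync_command = f"benchmark_app -m {model_path} -t 15 -d CPU -api sync"

# On a GPU, a larger batch size (-b) usually improves throughput.
gpu_command = f"benchmark_app -m {model_path} -t 15 -d GPU -api async -b 8"
print(async_command)
print(gpu_command)
```
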
06 Wrapping the benchmark command and calling it in tests
Next, we can wrap the benchmark instruction into a Python function interface, and use that function to make further calls in the tests that follow.
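A minimal sketch of such a wrapper, assuming the command-string style used earlier; the function name and signature are illustrative, not part of any OpenVINO API.

```python
def benchmark_model(model_path, device="CPU", seconds=15, api="async"):
    """Compose a benchmark_app command line for the given settings.

    An illustrative helper of the kind the notebook builds around
    benchmark_app; in a notebook the returned string would be executed
    with `! {command}`.
    """
    return f"benchmark_app -m {model_path} -d {device} -t {seconds} -api {api}"

# Reproduce the earlier manual run: 15 seconds on the CPU.
print(benchmark_model("model/public/mobilenet-v2/FP16/mobilenet-v2.xml"))
```
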

Let's first look at what hardware resources this platform has, and then run the model on the CPU again, setting the device parameter to CPU. Since the settings here match the command we ran before (also 15 seconds, also executed on the CPU), the overall performance should be, and indeed is, similar.

In fact, we can run this with different instructions, including using the AUTO device interface to let our task move between CPU and GPU and maximize the load.
Besides that, we can specify running on the GPU, or let the command run on CPU and GPU in parallel at the same time. Because the whole test process takes quite a while, if you are interested, try running these instructions yourself and see what performance you can get on different hardware.
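The device choices above can be sketched as a set of `-d` values. "CPU", "GPU", "AUTO", and "MULTI:CPU,GPU" are standard OpenVINO device strings; actual availability depends on your machine, and "model.xml" is a placeholder.

```python
# Illustrative -d values for benchmark_app.  "AUTO" lets OpenVINO pick the
# device; "MULTI:CPU,GPU" runs inference on both devices in parallel.
device_options = ["CPU", "GPU", "AUTO", "MULTI:CPU,GPU"]
commands = [f"benchmark_app -m model.xml -t 15 -d {d}" for d in device_options]
for command in commands:
    print(command)
```
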
Have you all learned it?
If you have any questions about the lesson,
Please feel free to send a private message to Nono~
--END--