OpenVINO API 2.0 vs. Previous Versions: An API Comparison
0. Installing OpenVINO for a Python Project
Install with pip; if the download is slow, consider specifying a mirror source.
pip install openvino-dev
pip install openvino
# needed for model conversion
pip install onnx
pip install onnxruntime
OpenVINO API 2.0 vs. Previous API Versions
OpenVINO API 2.0 differs considerably from earlier versions. This post excerpts the Python code from the official documentation, so that readers who cannot reach the official site can get a basic overview.
1. Create Core
Inference Engine API:
import numpy as np
import openvino.inference_engine as ie
core = ie.IECore()
 
OpenVINO™ Runtime API 2.0:
import openvino.runtime as ov
core = ov.Core()
1.1 (Optional) Load extensions (custom operators)
Inference Engine API:
core.add_extension("path_to_extension_library.so", "CPU")
  
 
OpenVINO™ Runtime API 2.0:
core.add_extension("path_to_extension_library.so")
  
2. Read a model from a drive
Inference Engine API:
network = core.read_network("model.xml")
  
 
OpenVINO™ Runtime API 2.0:
model = core.read_model("model.xml")
  
2.1 (Optional) Perform model preprocessing
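API 2.0 moves preprocessing into the model itself via openvino.preprocess.PrePostProcessor. A minimal sketch, assuming the application supplies u8 NHWC image data while the model expects f32 NCHW (the types and layouts here are illustrative assumptions, not part of the original post):

OpenVINO™ Runtime API 2.0:
from openvino.preprocess import PrePostProcessor
from openvino.runtime import Layout, Type

ppp = PrePostProcessor(model)
# Describe the data the application will actually provide (assumed u8 NHWC)
ppp.input().tensor().set_element_type(Type.u8).set_layout(Layout("NHWC"))
# Describe what the model expects; layout conversion is inserted automatically
ppp.input().model().set_layout(Layout("NCHW"))
# Convert u8 -> f32 as an explicit preprocessing step
ppp.input().preprocess().convert_element_type(Type.f32)
# Embed the preprocessing steps into the model
model = ppp.build()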
3. Load the Model to the Device
Inference Engine API:
# Load network to the device and create infer requests
exec_network = core.load_network(network, "CPU", num_requests=4)
  
 
OpenVINO™ Runtime API 2.0:
compiled_model = core.compile_model(model, "CPU")
  
4. Create an Inference Request
Inference Engine API:
# Done in the previous step
  
 
OpenVINO™ Runtime API 2.0:
infer_request = compiled_model.create_infer_request()
  
5. Fill input tensors
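The Inference Engine API exposes inputs as blobs mapped by layer name, while API 2.0 returns tensors by index, whose .data member is a numpy view. A minimal sketch, assuming a hypothetical input layer named "data1":

Inference Engine API:
infer_request = exec_network.requests[0]
# Input blobs are mapped by input layer name ("data1" is a hypothetical name)
data = infer_request.input_blobs["data1"].buffer
# buffer is a numpy array; fill it in place
data[:] = 0

OpenVINO™ Runtime API 2.0:
# Get the input tensor by index
input_tensor = infer_request.get_input_tensor(0)
# .data is a numpy view over the tensor memory; fill it in place
input_tensor.data[:] = 0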

6. Start Inference (synchronous and asynchronous)
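Both APIs support synchronous (blocking) inference and asynchronous inference where the request runs while the application does other work. A minimal sketch of each mode:

Inference Engine API:
# Synchronous: blocks until results are ready
results = exec_network.infer()
# Asynchronous: start a request, do other work, then wait for it
infer_request = exec_network.requests[0]
infer_request.async_infer()
infer_request.wait()

OpenVINO™ Runtime API 2.0:
# Synchronous: returns a dict of results when inference completes
results = infer_request.infer()
# Asynchronous: start the request, then wait for completion
infer_request.start_async()
infer_request.wait()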

7. Process the Inference Results
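Results follow the same pattern as inputs: blobs by output name in the old API, tensors by index in API 2.0. A minimal sketch, assuming a hypothetical output layer named "out1":

Inference Engine API:
# Output blobs are mapped by output layer name ("out1" is a hypothetical name)
data = infer_request.output_blobs["out1"].buffer

OpenVINO™ Runtime API 2.0:
# Get the output tensor by index; .data is a numpy view over the results
output_tensor = infer_request.get_output_tensor(0)
data = output_tensor.data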
