Inference Engine & integration with Deep Learning applications


Lab - Benchmark and compare performance

In this lab exercise on Intel® DevCloud for the Edge, you will use the OpenVINO™ Benchmark App to evaluate your model's inference performance on different hardware. The Benchmark Python* tool estimates deep learning inference performance on supported devices. Performance can be measured in two inference modes: synchronous (latency-oriented) and asynchronous (throughput-oriented).
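The latency/throughput distinction above can be illustrated without OpenVINO at all. The sketch below is a hypothetical stand-in (the `fake_infer` function simply sleeps to simulate one inference call; it is not the OpenVINO API): synchronous mode issues one request at a time and reports average latency, while asynchronous mode keeps several requests in flight and reports requests per second.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_infer(_):
    """Stand-in for a single inference call (hypothetical workload)."""
    time.sleep(0.01)

def sync_latency(n_requests=20):
    """Synchronous mode: one request at a time; report average latency (s)."""
    start = time.perf_counter()
    for i in range(n_requests):
        fake_infer(i)
    return (time.perf_counter() - start) / n_requests

def async_throughput(n_requests=20, n_parallel=4):
    """Asynchronous mode: several requests in flight; report requests/sec."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_parallel) as pool:
        list(pool.map(fake_infer, range(n_requests)))
    return n_requests / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"sync avg latency: {sync_latency() * 1000:.1f} ms")
    print(f"async throughput: {async_throughput():.1f} req/s")
```

With parallel requests, throughput rises even though each individual request is no faster, which is exactly the trade-off the Benchmark App's two modes expose.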

This tutorial benchmarks the deep learning model with:

1. Different hardware
2. Workload distribution with the MULTI plugin
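As a sketch of what the notebook walks through, the Benchmark App can be invoked from the command line with a target device and inference mode. The `model.xml` path is a placeholder for your own IR model; the exact cells in the DevCloud notebook may differ:

```shell
# Latency-oriented run on CPU (synchronous inference)
benchmark_app -m model.xml -d CPU -api sync

# Throughput-oriented run on GPU (asynchronous inference)
benchmark_app -m model.xml -d GPU -api async

# Distribute the workload across devices with the MULTI plugin
benchmark_app -m model.xml -d MULTI:CPU,GPU -api async
```

The `-d` flag selects the target device, and `MULTI:CPU,GPU` asks the MULTI plugin to spread inference requests across both devices.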

To get started, follow the below steps:

  • Go to Intel® DevCloud for the Edge and log in with your Intel account.

  • Navigate to Home page --> Learn --> Get Started --> Tutorials --> Benchmark App

  • Follow the steps mentioned in the Jupyter* Notebook to understand the concepts and complete this lab.

