ONNX* allows AI developers to easily transfer models between different frameworks. Today, PyTorch, Caffe2, Apache MXNet, Microsoft Cognitive Toolkit, and other tools have ONNX support. Refer to the list of supported public ONNX* topologies.
The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
To convert an ONNX* model:
Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory. Run the mo.py script, passing the path to the input .onnx model file:
python3 mo.py --input_model <INPUT_MODEL>.onnx
There are no ONNX* specific parameters, so only framework-agnostic parameters are available to convert your model.
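A few of those framework-agnostic parameters can be combined in a single invocation. The sketch below is illustrative only: the model path, input shape, output directory, and model name are placeholders, not values from this lab.

```shell
# Override the input shape, convert weights to FP16, and set the
# output directory and base name for the generated IR files
# (all values here are placeholders):
python3 mo.py --input_model model.onnx \
              --input_shape "[1,3,224,224]" \
              --data_type FP16 \
              --output_dir ./ir_models \
              --model_name my_model
```

On success, Model Optimizer writes my_model.xml (the network topology) and my_model.bin (the weights) into the output directory.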
In this lab, let's convert the public SqueezeNet ONNX* model to IR format.
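As a sketch of what the notebook walks through, the conversion itself is a single mo.py call. This assumes the SqueezeNet ONNX model has already been downloaded into the working directory; the filename squeezenet1.1.onnx is an assumption, so adjust it to match the file you actually downloaded.

```shell
cd <INSTALL_DIR>/deployment_tools/model_optimizer
# Convert the downloaded ONNX model to IR format:
python3 mo.py --input_model squeezenet1.1.onnx --output_dir ./ir_models
# On success, ./ir_models contains squeezenet1.1.xml (topology)
# and squeezenet1.1.bin (weights).
```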
Download the Jupyter* Notebook for this lab from the Smart Video Workshop GitHub* repo and follow the steps below to upload it to Intel® DevCloud for the Edge.
• Navigate to Intel® DevCloud for the Edge
• Log in to your account
• From the menu bar, select Build
• Select Connect and Create
• Click on Upload to upload the Jupyter* Notebook file
• Once the file is uploaded, double-click to open it
• Follow the steps in the Jupyter* Notebook to understand the concepts and complete the lab
See also: Converting a Model Using General Conversion Parameters