During the OpenVINO™ toolkit installation, you should have installed the Model Optimizer dependencies. Now let's look at how to convert a model with the common optimization parameters.
Use the mo.py script from the <INSTALL_DIR>/deployment_tools/model_optimizer directory to run the Model Optimizer and convert the model to the Intermediate Representation (IR). The simplest way to convert a model is to run mo.py with a path to the input model file:
python3 mo.py --input_model INPUT_MODEL
The mo.py script is the universal entry point that can deduce the framework that has produced the input model by a standard extension of the model file:
.caffemodel - Caffe* models
.pb - TensorFlow* models
.params - MXNet* models
.onnx - ONNX* models
.nnet - Kaldi* models
If the model files do not have standard extensions, you can use the --framework {tf,caffe,kaldi,onnx,mxnet} option to specify the framework type explicitly.
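The extension-to-framework mapping above can be sketched as a small lookup. This is an illustrative Python sketch of the deduction logic, not Model Optimizer's actual implementation; an unrecognized extension yields no match, which is exactly the case where you must pass --framework explicitly:

```python
import os

# Standard model-file extensions and the framework each one implies,
# mirroring the list above (illustrative sketch, not mo.py's real code).
EXTENSION_TO_FRAMEWORK = {
    ".caffemodel": "caffe",
    ".pb": "tf",
    ".params": "mxnet",
    ".onnx": "onnx",
    ".nnet": "kaldi",
}

def deduce_framework(model_path):
    """Return the framework implied by the file extension, or None."""
    ext = os.path.splitext(model_path)[1]
    # None means the extension is non-standard: specify --framework yourself.
    return EXTENSION_TO_FRAMEWORK.get(ext)

print(deduce_framework("/user/models/model.pb"))  # -> tf
print(deduce_framework("/user/models/model.weights"))  # -> None
```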
For example, the following commands are equivalent:
python3 mo.py --input_model /user/models/model.pb
python3 mo.py --framework tf --input_model /user/models/model.pb
Some models require additional arguments to specify conversion parameters, such as --scale, --scale_values, --mean_values, or --mean_file.
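The --mean_values and --scale_values options embed the input normalization into the IR, so the resulting graph effectively computes (x - mean) / scale per channel before the first layer. The following sketch shows only that arithmetic; the channel values are illustrative and not taken from any particular model:

```python
# Illustrative sketch of the per-channel normalization that
# --mean_values / --scale_values bake into the IR:
#     normalized = (value - mean) / scale
def normalize_pixel(pixel, mean_values, scale_values):
    """Apply (x - mean) / scale channel-wise, as the converted graph would."""
    return [(p - m) / s for p, m, s in zip(pixel, mean_values, scale_values)]

# One three-channel pixel; means/scales here are made up for demonstration.
pixel = [104.0, 117.0, 124.0]
print(normalize_pixel(pixel, [104.0, 117.0, 124.0], [255.0, 255.0, 255.0]))
# -> [0.0, 0.0, 0.0]
```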
To learn more about general model optimizer parameters, refer to Converting a Model Using General Conversion Parameters.