Optimization and Quantization of Models for better performance

Framework-specific optimization parameters

Many common layers, such as Convolution, Pooling, and Activation, exist across the known frameworks and neural network topologies. To read an original model and produce its Intermediate Representation, the Model Optimizer must be able to work with these layers. In addition to the general conversion parameters, there are framework-specific conversion parameters. Click an individual framework below to learn more about the specific parameters that should be used with it.

• Caffe

• TensorFlow

• MXNet

• ONNX

• Kaldi
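
To illustrate how the parameters differ per framework, the sketch below shows Model Optimizer invocations for each of the frameworks above. The flag names follow the OpenVINO `mo.py` conventions, but treat the exact parameters and file names as illustrative assumptions and check them against each framework's documentation.

```shell
# Sketch: framework-specific Model Optimizer invocations
# (file names are placeholders; flags are assumptions to verify).

# Caffe: the .prototxt topology file can be passed explicitly
# when its name does not match the .caffemodel weights file.
python3 mo.py --input_model model.caffemodel --input_proto deploy.prototxt

# TensorFlow: frozen graphs often need an explicit input shape,
# since placeholder dimensions may be undefined (-1).
python3 mo.py --input_model frozen_graph.pb --input_shape [1,224,224,3]

# MXNet: models are stored as a .json (symbols) / .params (weights) pair;
# the symbols file is located from the .params file name.
python3 mo.py --input_model model-0000.params

# ONNX: typically requires no framework-specific flags.
python3 mo.py --input_model model.onnx

# Kaldi: a counts file can be supplied, a Kaldi-specific option.
python3 mo.py --input_model model.nnet --counts model.counts
```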


The Model Optimizer supports many layers from each of these frameworks. The full list depends on the framework and can be found in the Supported Framework Layers section. If your topology contains only layers from that list, as is the case for most commonly used topologies, the Model Optimizer creates the Intermediate Representation without additional work, and you can proceed to the Inference Engine. However, if your topology contains layers that the Model Optimizer does not recognize out of the box, see Custom Layers in the Model Optimizer to learn how to work with them.
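
When a topology does contain unrecognized layers, the Model Optimizer can be pointed at user-provided extensions. A minimal sketch, assuming the standard `--extensions` flag and a hypothetical extension directory (see Custom Layers in the Model Optimizer for how such extensions are written):

```shell
# Sketch: converting a model that uses a custom layer.
# ./user_mo_extensions is a hypothetical directory containing
# custom-operation extensions implemented by the user.
python3 mo.py --input_model model_with_custom_layer.pb \
              --extensions ./user_mo_extensions
```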

There are many examples available of how to convert models from the supported frameworks. A few of them are listed in the resources below.


Resources

• Supported Framework Layers

• Custom Layers in the Model Optimizer 
