The Intel® Distribution of OpenVINO™ toolkit supports neural network model layers in multiple frameworks, including TensorFlow*, Caffe*, MXNet*, Kaldi*, and ONNX*. The Model Optimizer searches for each layer of the input model in the list of known layers before building the model's internal representation, optimizing the model, and producing the Intermediate Representation. The list of known layers differs for each supported framework; to see the layers supported by each framework, refer to Supported Framework Layers. Custom layers are layers that are not included in the list of known layers. If your topology contains any such layers, the Model Optimizer classifies them as custom.
The Custom Operations Guide illustrates the workflow for running inference on topologies featuring custom operations, allowing you to plug in your own implementation for an existing or a completely new operation.
The Model Optimizer extensibility mechanism enables support for new operations and custom transformations when generating the optimized intermediate representation (IR), as described in Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™ toolkit. This mechanism is a core part of the Model Optimizer: the tool itself uses it under the hood, so its source code doubles as a large set of examples of how to add custom logic to support your model.
To learn more about the model conversion pipeline, refer to Model Optimizer Extensibility.
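As a rough illustration of how an extension plugs in, the sketch below defines a custom operation using the legacy Model Optimizer Python API. The operation name MyCustomOp and the element-wise shape-inference assumption are hypothetical; the Op base class and the infer hook follow the pattern used by the Model Optimizer's own operation extensions, but verify the exact import paths against the Model Optimizer Extensibility documentation for your OpenVINO release.

```python
# my_custom_op.py -- a minimal, hypothetical Model Optimizer operation extension.
# Registering it tells the Model Optimizer how to represent a layer it would
# otherwise classify as custom while it builds the Intermediate Representation.
from mo.graph.graph import Graph, Node
from mo.ops.op import Op


class MyCustomOp(Op):
    # Operation type as it appears in the input model (hypothetical name).
    op = 'MyCustomOp'

    def __init__(self, graph: Graph, attrs: dict):
        super().__init__(graph, {
            'type': __class__.op,
            'op': __class__.op,
            # Shape-inference callback invoked while building the IR.
            'infer': MyCustomOp.infer,
        }, attrs)

    @staticmethod
    def infer(node: Node):
        # Assumption: the operation is element-wise, so the output
        # shape simply mirrors the input shape.
        node.out_port(0).data.set_shape(node.in_port(0).data.get_shape())
```

In practice a matching front extractor (a FrontExtractorOp subclass) is usually needed as well, and the extension directory is passed to the Model Optimizer via its --extensions command-line option; see Model Optimizer Extensibility for the complete workflow.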