Can Model Optimizer create INT8 model weights?
Yes
No
1 Introduction to Model Optimizer and understanding its significance
2 Converting and Preparing the Models with Model Optimizer
3 Quiz Question
4 Quiz Question
5 Model conversion overview with Model Optimizer
6 Converting a Model to Intermediate Representation (IR)
7 Understanding the General Conversion Parameters
8 Model Optimizer Demo
9 Lab: Convert an ONNX model and compare model size between FP32 and FP16 (see the first sketch after this outline)
10 Framework specific optimization parameters
11 Lab: Convert a TensorFlow* model to Intermediate Representation (see the second sketch after this outline)
12 Quiz Question
13 Model optimization techniques and understanding how Model Optimizer works under the hood
14 Choosing the right Quantization option based on the Hardware Platform
15 Quiz Question
16 Understanding the advantages of INT8 quantization
17 Introduction to low-precision optimization with the Post-Training Optimization Tool (POT)
18 Optimizations - Advanced topics
19 Quiz Question
20 Quiz Question
21 Quiz Question
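For the ONNX lab (item 9), the following is a minimal sketch of the idea, not the lab's official solution. It assumes OpenVINO's Model Optimizer is installed and the mo command is on PATH, that model.onnx is a placeholder for your model, and that the --data_type flag is available (older releases; newer releases replace it with --compress_to_fp16). Model Optimizer writes the weights to the .bin file, so comparing the .bin sizes shows the FP32 vs. FP16 difference.

# Hedged sketch: convert a placeholder ONNX model to IR twice (FP32 and FP16)
# and compare the size of the generated .bin weight files.
import os
import subprocess

MODEL = "model.onnx"  # placeholder path to your ONNX model

for precision in ("FP32", "FP16"):
    out_dir = f"ir_{precision.lower()}"
    # Model Optimizer produces <name>.xml (topology) and <name>.bin (weights).
    subprocess.run(
        ["mo",
         "--input_model", MODEL,
         "--data_type", precision,   # assumption: older-style flag; newer mo uses --compress_to_fp16
         "--output_dir", out_dir],
        check=True,
    )
    bin_file = os.path.join(out_dir, "model.bin")
    print(f"{precision}: {os.path.getsize(bin_file) / 2**20:.1f} MiB")

The FP16 .bin should come out at roughly half the FP32 size, since the weights dominate the file.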
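For the TensorFlow* lab (item 11), a similarly hedged sketch is below. The frozen graph name and the [1,224,224,3] input shape are placeholder assumptions; both would come from the actual lab model.

# Hedged sketch: convert a placeholder TensorFlow* frozen graph to IR.
import subprocess

subprocess.run(
    ["mo",
     "--input_model", "frozen_inference_graph.pb",  # placeholder frozen graph
     "--input_shape", "[1,224,224,3]",              # assumption: NHWC input shape of the model
     "--output_dir", "ir_tf"],
    check=True,
)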