Congratulations on completing the lessons of this module. Now let us apply these concepts by building the following project. Please complete the tasks below:
Pre-requisites:
• Local installation of DL Workbench, or
• DL Workbench on Intel® DevCloud for the Edge
• Lessons from this module and previous modules
Tasks:
Create a project with the following configuration:
a. Model: yolo-v3-tf from the Open Model Zoo with FP16 precision
b. Environment: GPU
c. Dataset: Create a dataset of 30 images using the data augmentation options available in DL Workbench
Run a single inference with batch size = 2 and number of parallel streams = 4, then compare the performance and visualize the performance metrics.
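DL Workbench reports throughput and latency for you; as a rough sketch of how those two metrics relate to the batch size and stream count, assuming entirely hypothetical per-request timings:

```python
# Hypothetical sketch of how throughput/latency metrics are derived from
# per-request timings. All timing values below are made up, not measured.
batch_size = 2
num_streams = 4

# Assume we recorded the wall-clock duration (ms) of 100 inference requests.
request_durations_ms = [12.0, 13.5, 12.8, 14.1] * 25  # 100 requests total

total_images = batch_size * len(request_durations_ms)
# With num_streams requests in flight at once, wall-clock time is roughly
# the summed request time divided by the number of parallel streams.
wall_time_s = sum(request_durations_ms) / num_streams / 1000.0

throughput_fps = total_images / wall_time_s
avg_latency_ms = sum(request_durations_ms) / len(request_durations_ms)

print(f"Throughput: {throughput_fps:.1f} FPS, average latency: {avg_latency_ms:.2f} ms")
```

The point of the sketch is the trade-off: larger batches and more streams raise throughput, but each individual request takes longer.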
Create another project with a different dataset, using the following configuration:
a. Model: yolo-v3-tf from the Open Model Zoo with FP32 precision
b. Environment: CPU
c. Dataset: Import COCO validation dataset with annotations
Check the accuracy of the FP32 model.
Run a group inference with the following configurations, compare the performance, and visualize the performance metrics:
a. Batch sizes: 2, 4, 8
b. Streams: 2, 4, 8
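A group inference sweeps every batch/stream combination; a quick sketch of the resulting 3 × 3 experiment grid:

```python
from itertools import product

batch_sizes = [2, 4, 8]
stream_counts = [2, 4, 8]

# Each (batch, streams) pair is one inference experiment in the group run.
configurations = list(product(batch_sizes, stream_counts))

print(len(configurations))  # 9 experiments in total
for batch, streams in configurations:
    print(f"batch={batch}, streams={streams}")
```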
Find the best-performing configuration on the Throughput vs. Latency graph for your model, assuming a latency threshold requirement.
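Conceptually, "best under a latency threshold" means the highest-throughput point whose latency stays within the limit. A sketch with made-up measurements (the numbers are illustrative, not real DL Workbench results):

```python
# Hypothetical (batch, streams) -> (throughput FPS, latency ms) results.
results = {
    (2, 2): (180.0, 22.0),
    (2, 4): (240.0, 33.0),
    (4, 4): (310.0, 51.0),
    (8, 8): (420.0, 152.0),
}

latency_threshold_ms = 60.0  # assumed application requirement

# Keep only configurations that meet the latency requirement, then pick
# the highest-throughput one among them.
feasible = {cfg: perf for cfg, perf in results.items() if perf[1] <= latency_threshold_ms}
best_cfg = max(feasible, key=lambda cfg: feasible[cfg][0])

print(best_cfg)  # (4, 4): highest throughput under the 60 ms threshold
```

Note that (8, 8) has the best raw throughput but is rejected because its latency exceeds the threshold.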
Optimize the model to INT8 using the Default Quantization method and check its accuracy.
Compare the accuracy, latency, and throughput values for the INT8 and FP32 precisions.
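This comparison usually boils down to a speedup factor and an accuracy delta. A sketch with placeholder numbers (substitute the values DL Workbench actually reports for your runs):

```python
# Placeholder metrics -- not real measurements.
fp32 = {"accuracy_map": 62.3, "latency_ms": 48.0, "throughput_fps": 210.0}
int8 = {"accuracy_map": 61.8, "latency_ms": 21.0, "throughput_fps": 470.0}

speedup = int8["throughput_fps"] / fp32["throughput_fps"]
latency_gain = fp32["latency_ms"] / int8["latency_ms"]
accuracy_drop = fp32["accuracy_map"] - int8["accuracy_map"]

print(f"Throughput speedup: {speedup:.2f}x")
print(f"Latency improvement: {latency_gain:.2f}x")
print(f"Accuracy drop: {accuracy_drop:.2f} mAP points")
```

A typical outcome of Default Quantization is a sizable speedup at the cost of a small accuracy drop; whether that drop is acceptable depends on your application.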
Create a deployment package with the optimal model configuration for the following target environment:
a. OS: Ubuntu 18.04
b. Target: CPU
c. Include Model: Yes
d. Include Python API: Yes
e. Include Install Script: Yes
Download the ZIP file, analyze its contents, and deploy the package on the target machine.
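To analyze the downloaded package programmatically, Python's standard zipfile module is enough. The file names below are illustrative of what a deployment package may contain, not the exact layout DL Workbench produces:

```python
import io
import zipfile

# Build a stand-in archive to demonstrate inspection; in practice you would
# open the ZIP downloaded from DL Workbench instead.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as zf:
    zf.writestr("model/yolo-v3-tf.xml", "<net/>")   # hypothetical file names
    zf.writestr("model/yolo-v3-tf.bin", b"\x00\x01")
    zf.writestr("install_script.sh", "#!/bin/sh\n")

# List every entry and its size, as a quick content audit.
with zipfile.ZipFile(buffer) as zf:
    for info in zf.infolist():
        print(f"{info.filename}: {info.file_size} bytes")
```

For the real package, check that the model IR files, the Python API bindings, and the install script you requested are all present before deploying.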