Streamline AI Application Development with Deep Learning Workbench


Lab - DL Workbench

Congratulations on completing the lessons of this module. Now let us apply these concepts by building the following project. Please complete the tasks below:

Prerequisites:

• Local installation of DL Workbench, or

• DL Workbench on Intel® DevCloud for the Edge

• Lessons from this module and previous modules

Tasks:

  1. Create a project with the following configuration:

    a. Model: yolo-v3-tf from the Open Model Zoo with FP16 precision
    
    b. Environment: GPU
    
    c. Dataset: Create a dataset of 30 images using the data augmentation options available in DL Workbench
    
  2. Run a single inference with batch size = 2 and number of parallel streams = 4, then compare the performance and visualize the performance metrics.

  3. Create another project with a different dataset and the following configuration:

    a. Model: yolo-v3-tf from the Open Model Zoo with FP32 precision
    
    b. Environment: CPU
    
    c. Dataset: Import COCO validation dataset with annotations
    
  4. Check the accuracy of the FP32 model.

  5. Run a group inference with the following configuration, then compare the performance and visualize the performance metrics:

    a. Batch sizes: 2, 4, 8
    
    b. Streams: 2, 4, 8
    
  6. Find the best performance configuration in the Throughput vs. Latency graph for your model, assuming a latency threshold requirement.

  7. Optimize the model to INT8 with the Default quantization method and check its accuracy.

  8. Compare the accuracy, latency, and throughput values for the INT8 and FP32 precisions.

  9. Create a deployment package with the optimal model configuration for the following target environment:

    a. OS: Ubuntu 18.04
    
    b. Target: CPU
    
    c. Include Model: Yes
    
    d. Include Python API: Yes
    
    e. Include Install Script: Yes
    
  10. Download the ZIP file, analyze its contents, and deploy the package on the target machine.
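For Task 6, DL Workbench presents the group-inference results on a Throughput vs. Latency graph, and you pick the point with the highest throughput that still satisfies your latency requirement. The selection logic behind that visual step can be sketched in Python as follows; note that all measurement numbers below are invented placeholders for illustration, not real DL Workbench results:

```python
# Hypothetical (batch, streams, throughput_fps, latency_ms) tuples,
# standing in for the group-inference measurements from Task 5.
measurements = [
    (2, 2, 40.0, 55.0),
    (2, 4, 55.0, 80.0),
    (4, 4, 70.0, 120.0),
    (8, 8, 90.0, 210.0),
]

LATENCY_THRESHOLD_MS = 100.0  # example latency requirement

# Keep only configurations that meet the latency threshold,
# then take the one with the highest throughput.
feasible = [m for m in measurements if m[3] <= LATENCY_THRESHOLD_MS]
best = max(feasible, key=lambda m: m[2])

print(f"Best config under {LATENCY_THRESHOLD_MS} ms: "
      f"batch={best[0]}, streams={best[1]}, "
      f"{best[2]} FPS at {best[3]} ms")
```

In DL Workbench itself you perform this selection on the graph rather than in code; the sketch simply makes explicit what "best configuration under a latency threshold" means.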


No hints are available for this assessment.

An answer is not available for this assessment.