This course will guide you to:
• Develop deep learning applications seamlessly.
• Optimize and improve performance with and without external accelerators.
• Run high-performance inference with a “Write Once and Deploy Anywhere” approach.
• Deploy the same application across combinations of host processors, accelerators, and environments: CPUs, GPUs, and VPUs, running on-premises, on-device, in the browser, or in the cloud (see the sketch after this list).
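
As a minimal sketch of the “Write Once and Deploy Anywhere” idea, the Python snippet below compiles the same model for whichever device you name. The model file, device string, and dummy input are placeholders, not course material; only the device name changes when you retarget the application.

```python
# Minimal sketch: device-agnostic inference with the OpenVINO(tm) Python API.
# "model.xml" and the "CPU" device string are placeholders; point the same
# code at "GPU", "MYRIAD", or "AUTO" without changing anything else.
import numpy as np
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)

model = core.read_model("model.xml")                  # hypothetical IR file
compiled = core.compile_model(model, device_name="CPU")

# Feed a dummy input matching the model's declared input shape.
dims = [int(d) for d in compiled.input(0).shape]
dummy = np.random.rand(*dims).astype(np.float32)

request = compiled.create_infer_request()
request.infer({0: dummy})
print("Output shape:", request.get_output_tensor(0).data.shape)
```

Swapping the device string is the entire porting effort, which is the point of the bullet above.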
The course also covers advanced features and tools provided by the OpenVINO™ toolkit, and you will learn how to use them to identify the best inference configuration for your needs. You will develop, test, and run your AI solution on a cluster of the latest Intel® hardware and software with Intel® DevCloud for the Edge.

This course focuses on developing deep learning inference applications, not on model training. The OpenVINO™ toolkit provides a set of pre-trained models that you can use for learning and demo purposes or for developing deep learning software.
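
The pre-trained models mentioned above ship through the Open Model Zoo. As an illustrative example, assuming the openvino-dev pip package is installed, one can be fetched with the omz_downloader tool, driven here from Python to keep the snippet self-contained; the model name and output directory are example values.

```python
# Illustrative sketch: download a pre-trained Open Model Zoo model by
# invoking the omz_downloader CLI that ships with the openvino-dev package.
# "face-detection-adas-0001" and "models/" are example values.
import subprocess

subprocess.run(
    ["omz_downloader", "--name", "face-detection-adas-0001",
     "--output_dir", "models"],
    check=True,
)
```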