Inference Engine & Integration with Deep Learning Applications


Understanding the various Intel® AI platforms (CPU, iGPU, and VPU) used to achieve accelerated performance

Intel® AI Compute Portfolio

Artificial intelligence offers extraordinary possibilities for the future of every industry. With AI acceleration and optimization that goes silicon deep and ecosystem wide, Intel offers the flexibility you need to create the deployment you want. From hardware that excels at training on massive, unstructured data sets to extreme low-power silicon for on-device inference, Intel AI supports cloud service providers, enterprises, and research teams with a portfolio of multi-purpose, purpose-built, customizable, and application-specific hardware that turns models into reality. This portfolio ranges from workload-focused CPU and GPU products to programmable components and deep learning accelerators, supporting diverse customer usages across IoT segments.

CPU

Intel's CPUs are the most flexible processors for generic AI workloads and are the foundation for deep learning inference, with AI-enhanced capabilities (e.g., Intel® Deep Learning Boost, Intel® AVX-512) integrated into the silicon.

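To make the integration concrete, below is a minimal sketch of running synchronous inference on the CPU through the OpenVINO™ Inference Engine Python API (the classic IECore interface from the 2021 releases is assumed, and model.xml/model.bin are placeholder paths to a model in IR format):

```python
import numpy as np
from openvino.inference_engine import IECore

# Read a model in OpenVINO IR format (paths are placeholders).
ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Compile the network for the CPU plugin, which uses
# AVX-512 / DL Boost instructions where available.
exec_net = ie.load_network(network=net, device_name="CPU")

# Prepare a dummy input matching the network's expected shape.
input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape
dummy = np.random.rand(*shape).astype(np.float32)

# Run synchronous inference and collect the output blobs.
results = exec_net.infer(inputs={input_name: dummy})
print({name: blob.shape for name, blob in results.items()})
```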

iGPU

Intel's integrated GPUs, with built-in hardware support for AI workloads (e.g., the dp4a instruction), complement the CPU to further accelerate AI workloads.


VPU

Intel® Movidius™ VPUs enable computer vision and edge AI workloads on intelligent cameras and edge servers, striking a balance between power efficiency and compute performance.

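The same workflow carries over to the integrated GPU and to Movidius™ VPUs simply by changing the device name when loading the network. The sketch below (same assumed IECore API; "GPU" is the iGPU plugin and "MYRIAD" the plugin for Movidius devices such as the Neural Compute Stick 2) shows how to discover the available devices and pick one:

```python
from openvino.inference_engine import IECore

ie = IECore()

# List the inference devices the runtime can see, e.g.
# ['CPU', 'GPU', 'MYRIAD', 'GNA'] depending on the machine.
print(ie.available_devices)

net = ie.read_network(model="model.xml", weights="model.bin")

# "GPU" targets the integrated GPU plugin; "MYRIAD" would target
# a Movidius VPU. Fall back to the CPU if no iGPU is present.
target = "GPU" if "GPU" in ie.available_devices else "CPU"
exec_net = ie.load_network(network=net, device_name=target)
```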

FPGA

Intel® FPGAs are well suited to AI inference usages that demand low latency, enabling safer and more interactive experiences.

Intel® RealSense™ Depth and Tracking Cameras

Intel RealSense depth & tracking cameras, modules, and processors give devices the ability to perceive and interact with their surroundings.

Intel® Gaussian & Neural Accelerator (Intel® GNA)

When power and performance are critical, Intel® GNA provides power-efficient, always-on support. Intel® GNA is designed to process AI speech and audio workloads such as neural noise cancellation while simultaneously freeing up CPU resources for overall system performance and responsiveness. The GNA plugin provides a way to run inference on Intel® GNA hardware, as well as a software execution mode that emulates it on the CPU.

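A short sketch of targeting the GNA plugin follows; the GNA_DEVICE_MODE configuration key and its GNA_AUTO / GNA_SW_EXACT values are taken from the OpenVINO™ GNA plugin documentation, and the model paths remain placeholders:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# GNA_AUTO runs on the GNA hardware when present and falls back
# to software execution on the CPU otherwise; GNA_SW_EXACT would
# force the bit-exact software mode mentioned above.
exec_net = ie.load_network(
    network=net,
    device_name="GNA",
    config={"GNA_DEVICE_MODE": "GNA_AUTO"},
)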


In addition to the Intel hardware foundation, Intel technologies deliver further performance and reliability gains:

• Intel® Distribution of OpenVINO™ toolkit: A toolkit for optimizing trained deep learning models and deploying them for inference across Intel® hardware (CPU, iGPU, VPU, FPGA, and GNA) through a common Inference Engine API.

• Intel® Math Kernel Library (Intel® MKL): This library optimizes code for future generations of Intel processors with minimal effort. It is compatible with a broad array of compilers, languages, operating systems, and linking and threading models.

• Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN): An open-source, performance-enhancing library for accelerating DL frameworks on Intel hardware.

• Intel® Distribution for Python*: Accelerates AI-related Python libraries such as NumPy, SciPy, and scikit-learn* with integrated Intel® Performance Libraries such as Intel MKL for faster AI inferencing; a quick way to verify the MKL linkage is sketched after this list.

• Framework optimizations: Intel has worked with Google* on TensorFlow*, with Apache* on MXNet*, with Baidu* on PaddlePaddle*, and on Caffe* to enhance DL performance using software optimizations for Intel® Xeon® Scalable processors in the data center, and it continues to add frameworks from other industry leaders.

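As a quick check of the Intel® Distribution for Python point above, NumPy can report the BLAS/LAPACK implementation it was built against; under the Intel distribution, the output references Intel MKL:

```python
import numpy as np

# Prints the BLAS/LAPACK build configuration; with the Intel
# Distribution for Python the entries reference Intel MKL.
np.show_config()
```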

