Inference Engine & integration with Deep Learning applications


CPU vs. GPU: Making the Most of Both

Today, GPUs run a growing number of workloads, such as deep learning and artificial intelligence (AI). For deep learning training with many neural network layers, or on massive datasets of certain kinds of data, such as 2D images, GPUs or other accelerators are ideal.

Deep learning algorithms were adapted to a GPU-accelerated approach, gaining a significant boost in performance and, for the first time, making training on several real-world problems feasible.

Over time, CPUs and the software libraries that run on them have evolved to become much more capable for deep learning tasks. For example, through extensive software optimizations and the addition of dedicated AI hardware, such as Intel® Deep Learning Boost (Intel® DL Boost) in the latest Intel® Xeon® Scalable processors, CPU-based systems have enjoyed improvements in deep learning performance.

CPUs shine in many applications, such as high-definition, 3D, and non-image-based deep learning on language, text, and time-series data. For complex models or demanding deep learning applications (e.g., 2D image detection), CPUs can support much larger memory capacities than even the best GPUs can today. The combination of CPU and GPU, along with sufficient RAM, offers a great testbed for deep learning and AI.
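The memory-capacity trade-off described above can be sketched as a simple device-placement heuristic. This is a hypothetical helper (the function name, memory figures, and thresholds are illustrative assumptions, not part of any framework): if the model's working set fits in GPU memory, prefer the accelerator; otherwise fall back to the CPU's larger host RAM.

```python
def choose_device(model_mem_gb: float,
                  gpu_mem_gb: float = 16.0,
                  has_gpu: bool = True) -> str:
    """Illustrative placement heuristic: prefer the GPU when the
    model's working set fits in its memory, otherwise use the CPU,
    which typically has access to far more host RAM.

    All numbers here are assumed examples, not measured limits.
    """
    if has_gpu and model_mem_gb <= gpu_mem_gb:
        return "gpu"
    return "cpu"


# A small model fits on a 16 GB GPU; a large one spills to the CPU.
print(choose_device(8.0))    # gpu
print(choose_device(48.0))   # cpu
print(choose_device(8.0, has_gpu=False))  # cpu
```

Real frameworks make this decision with runtime queries (e.g., checking accelerator availability and free memory) rather than fixed constants; the sketch only captures the reasoning.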
