What's the best IP for machine learning workloads – CPU, GPU or NPU?

community.arm.com, Jul. 30, 2019 – 

At Arm we're often asked by partners, developers and other interested parties within the large and complex machine learning (ML) ecosystem which processors are best at performing specific ML tasks on different devices. As described in this Arm white paper, the CPU is still the common denominator for ML experiences from edge to cloud. It remains central to all ML systems, whether it handles ML tasks entirely on its own or partners with other processors, such as GPUs or NPUs. However, the choice of IP for ML workloads varies with the device's ML requirements, the use case and the specific workloads.

There is no one-size-fits-all ML solution; there are many different versions and deployment choices. In this blog, I'll take you through a selection of ML use cases on devices, from face unlock on smartphones and PCs to content recommendations on smart TVs, and look at which processors – CPU, GPU or NPU – typically carry out the ML workloads in each case.

The CPU, GPU and NPU

Before delving into the use cases, it's worth first giving a general overview of the advantages of ML compute on the CPU, GPU and NPU. As the CPU sits at the center of the compute system, it has the flexibility to run any type of ML workload, and it is often the first-choice ML processor for mobile computing. Although the GPU's primary function is graphics processing, its parallel data-processing capability also makes it well suited to ML workloads. Finally, the NPU is designed for specialized, highly efficient, task-specific ML compute.
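The division of labor described above can be sketched as a simple dispatch rule: delegate a workload to the most specialized available processor, and fall back to the CPU, which can always run it. This is a minimal illustrative sketch, not a real runtime API; the workload names, availability flags and `choose_backend` function are all hypothetical.

```python
# Hypothetical sketch of how an ML runtime might pick a processor for a
# workload. The workload labels and availability flags are illustrative
# assumptions, not part of any real framework.

def choose_backend(workload, npu_available=False, gpu_available=False):
    """Return the processor a runtime might delegate this workload to."""
    if workload == "quantized_cnn" and npu_available:
        return "NPU"   # specialized, task-specific accelerator
    if workload in ("float_cnn", "image_preproc") and gpu_available:
        return "GPU"   # parallel data processing suits these layers
    return "CPU"       # common denominator: can run any ML workload

# Usage: with no accelerator present, everything lands on the CPU.
print(choose_backend("quantized_cnn", npu_available=True))  # NPU
print(choose_backend("quantized_cnn"))                      # CPU
```

The key design point mirrors the article: the CPU branch is the unconditional fallback, so every workload always has somewhere to run even when no GPU or NPU is available.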



© 2023 Design And Reuse. All rights reserved.

No part of this website may be copied, republished, reposted or otherwise used without the permission of Design & Reuse.