40 "AI Processor" SoCs

1
Arm Ethos-N57 NPU
ML Inference Processor with Balanced Efficiency and Performance

2
Cortex-M55
The Arm Cortex-M55 processor is Arm's most AI-capable Cortex-M processor and the first to feature Arm Helium vector processing technology.

3
Cortex-M85
A New Milestone for High-Performance Microcontrollers, Arm Cortex-M85 is the highest-performing Cortex-M processor with...

4
Enhanced Neural Processing Unit for safety, providing 32,768 MACs/cycle of performance for AI applications
Synopsys ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applications requiring AI-enabled SoCs. The ARC NPX6 NPU IP is designed f...

5
Ethos-N78
Highly Scalable and Efficient Second-Generation ML Inference Processor

6
Ethos-U55 - Embedded ML Inference for Cortex-M Systems
Unlock the Benefits of AI with this Best-in-Class Solution

7
Ethos-U65
AI Innovation for Edge and Endpoint Devices

8
Ethos-U85
Accelerate Edge AI Innovation. AI data-processing workloads at the edge are already transforming use cases and user experiences. The third-generation Ethos NPU helps meet the needs of future e...

9
Neo NPU - Scalable and Power-Efficient Neural Processing Units
Highly scalable performance for classic and generative on-device and edge AI solutions. The Cadence Neo NPUs offer energy-efficient, hardware-based AI engines that can be paired with any host processor...

10
AI-Capable 3D GPU
Vivante AI-GPU augments the stunning 3D graphics rendering capabilities of the Vivante 3D engines with a dedicated Neural Network Engine (NN) and Tensor Processing Fabric to support advanced capabili...

11
CEVA NeuPro-M - NPU IP family for generative and classic AI with the highest power efficiency, scalable and future-proof
NeuPro-M redefines high-performance AI (Artificial Intelligence) processing for smart edge devices and edge compute with heterogeneous coprocessing, targeting generative and classic AI inferencing wor...

12
CEVA-BX1 Multipurpose DSP/Controller
The CEVA-BX architecture delivers excellent all-round performance for a new generation of smart devices, providing the perfect alternative to special-purpose DSPs and MCUs with DSP co-processors that cannot handle the diverse algorithm needs of today's applications.

13
CEVA-BX2 Multipurpose DSP/Controller
CEVA-BX2 is a multipurpose hybrid DSP and Controller, designed for the inherent low power requirements of DSP kernels with high-level programming and compact code size requirements of a large control code base.

14
CMNP - Chips&Media Neural Processor
Chips&Media's CMNP, the new Neural Processing Unit (NPU) product, competes as high-performance neural-processing IP for edge devices. CMNP provides exceptionally enhanced image quality based on...

15
Edge AI/ML accelerator (NPU)
TinyRaptor is a fully-programmable AI accelerator designed to execute deep neural networks (DNN) in an energy-efficient way. TinyRaptor reduces the inference time and power consumption needed to run ...

16
NeuPro Family of AI Processors
Dedicated low-power AI processor family for deep learning at the edge, providing self-contained, specialized AI processors that scale in performance for a broad range of end markets including IoT, smartphones, surveillance, automotive, robotics, medical and industrial.

17
Neural Network Processor for Intelligent Vision, Voice, Natural Language Processing
Efficient and Versatile Computer Vision, Image, Voice, Natural Language, Neural Network Processor

18
4-/8-bit mixed-precision NPU IP
OPENEDGES, the world's only total memory system and AI platform IP solution company, releases the first commercial mixed-precision (4-/8-bit) computation NPU IP, ENLIGHT. When ENLIGHT is used with ot...

19
ENLIGHT Pro - 8/16-bit mixed-precision NPU IP
The state-of-the-art inference neural processing unit (NPU) IP is suitable for high-performance edge devices including automotive, cameras, and more. ENLIGHT Pro is meticulously engineered to deliv...

20
memBrain
As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functi...

21
Complete Neural Processor for Edge AI
Akida is the first neuromorphic IP available on the market. Inspired by the biological function of neurons and engineered on a digital logic process, Akida's event-based spiking neural network (SNN) perf...

22
General Purpose Neural Processing Unit
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system-on-chip (SoC) developers, Quadric's General Purpose Neural Processing Unit (GPNPU)...

23
InferX X1 Edge Inference Co-Processor
The InferX X1 Edge Inference Co-Processor is optimized for what the edge needs: large models, and large models at batch=1.

24
NMAX512 : Neural Inferencing Tile for 1 to >100 TOPS
NMAX has a unique new architecture that loads weights rapidly compared to existing solutions.

25
nnMAX 1K AI Inference IP for 2 to >100 TOPS at low power, low die area

26
RISC-V-based AI IP development for enhanced training and inference
Tenstorrent develops AI IP with precision, anchored in RISC-V’s open architecture, delivering specialized, silicon-proven solutions for both AI training and inference. Our platforms are optimized for ...

27
Ultra Low Power Edge AI Processor
DMP's AI processor IP, ZIA™ DV740, is an ultra-low-power processor IP for deep learning on the edge side, specialized in inference processing. ZIA™ DV740 enables inference processing on multiple...

28
v-MP6000UDX processor
Deep learning has quickly become a must-have technology to bring new smart sensing and intelligent analysis capabilities to all of our electronics. Whether it's self-driving cars that need to understa...

29
ZIA DV500 Series - Ultra Low Power Consumption Processor IP for Deep Learning
AI inference processor IP that achieves a smaller size and ultra-low power consumption by being optimized for object recognition and scene understanding, often used in industrial equipment and automobi...

30
ZIA DV700 Series - Configurable AI inference processor IP
Configurable AI inference processor IP that can optimize performance and size, and can process all data such as images, videos, and sounds on the edge side, where real-time response, safety, and privacy p...

31
ZIA ISP - Small-size ISP IP ideal for AI camera systems
Small-size ISP (Image Signal Processing) IP ideal for AI camera systems.

32
AI processing engine
AON1010™ belongs to the highly optimized AONVoice™ Neural Network cores for Voice and Audio recognition. This solution is optimized for processing microphone data for applications including voice and ...

33
Artificial Intelligence Cores
The most flexible solution on the market, giving the user the ability to select the best combination of performance, power and cost.

34
C860 High-performance 32-bit multi-core processor with AI acceleration engine
C860 utilizes a 12-stage superscalar pipeline, with a standard memory management unit, and can run Linux and other operating systems.

35
CortiCore - Neural Processing Engine
Roviero has developed a native graph-computing processor for edge inference. The CortiCore architecture provides the solution via a unique instruction set that dramatically reduces compiler comple...

36
Jotunn - Generative AI Platform
The "Memory Wall" was first conceived as a theory by Wulf and McKee in 1994. It posited that the development of the processing unit (CPU) far outpaced that of memory. As a result, the ...

37
POLYN PPG NASP IP Block
POLYN PPG is a Neuromorphic Analog Signal Processor (NASP) with Direct Analog Input (DAI) for real-time edge pulse determination at a fraction of the power consumed by traditional devices.

38
Powerful AI processor
Highly optimized for generative AI applications running Transformer-based NN models, enabling highly efficient and simple system designs. An ideal companion to custom NN hardware accelerators using hi...

39
Speedster7t FPGAs
Speedster®7t FPGAs are optimized for high-bandwidth workloads and eliminate the performance bottlenecks associated with traditional FPGAs.

40
Spiking Neural Processor
Innatera's ultra-efficient neuromorphic processors mimic the brain's mechanisms for processing sensory data. Based on a proprietary analog-mixed signal computing architecture, Innatera'...