www.design-reuse-china.com
121 "Artificial Intelligence" SoCs

1
ARC EV Processors are fully programmable and configurable IP cores that are optimized for embedded vision applications

DesignWare EV Embedded Vision Processors provide high-performance processing capabilities at a power and cost point low enough for embedded applications, while maintaining flexibility to support an...


2
Enhanced Neural Processing Unit for safety providing 32,768 MACs/cycle of performance for AI applications
Synopsys ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applications requiring AI enabled SoCs. The ARC NPX6 NPU IP is designed f...
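A raw MACs/cycle figure maps to TOPS only once a clock frequency is fixed. A minimal sketch of that arithmetic, assuming a hypothetical 1.3 GHz clock (the frequency is an illustrative assumption, not a figure from this listing):

```python
# Back-of-envelope conversion from MACs/cycle to TOPS.
# One MAC counts as 2 ops (multiply + accumulate); the clock
# frequency is an assumption for illustration only.
def macs_to_tops(macs_per_cycle: int, clock_hz: float) -> float:
    return macs_per_cycle * 2 * clock_hz / 1e12

tops = macs_to_tops(32_768, 1.3e9)  # assumed 1.3 GHz clock
print(f"{tops:.1f} TOPS")
```

At an assumed 1.3 GHz, 32,768 MACs/cycle works out to roughly 85 TOPS; real throughput depends on the process node and the clock the silicon actually achieves.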

3
EV74 processor IP for AI vision applications with 4 vector processing units
The DesignWare® ARC® EV71, EV72, and EV74 Embedded Vision Processor IP provides high performance, low power, area efficient solutions for a standalone computer vision and/or AI algorithms engine or as...

4
EV7x Vision Processors

The Synopsys EV7x Vision Processors' heterogeneous architecture integrates vector DSP, vector FPU, and a neural network accelerator to provide a scalable solution for a wide range of current an...


5
EV7xFS Vision Processors for Functional Safety

The ASIL B or D Ready Synopsys EV7xFS Embedded Vision Processors enable automotive system-on-chip (SoC) designers to accelerate Advanced Driver Assistance Systems (ADAS) and autonomous vehicle appl...


6
HBM3 PHY for AI and machine learning model training

The Rambus High-Bandwidth Memory generation 3 (HBM3) PHY is optimized for systems that require a high-bandwidth, low-latency memory solution. The memory subsystem PHY supports data rates up to 8.4 ...


7
Neo NPU - Scalable and Power-Efficient Neural Processing Units
Highly scalable performance for classic and generative on-device and edge AI solutions. The Cadence Neo NPUs offer energy-efficient hardware-based AI engines that can be paired with any host processor...

8
NeuroWeave SDK - Faster Product Development for the Evolving AI Market
A common AI software solution for faster product development. Developing an agile software stack is important for successful artificial intelligence and machine learning (AI/ML) deployment at the edge...

9
Smart Data Acceleration
The Rambus Smart Data Acceleration (SDA) research program is focused on tackling some of the major issues facing data centers and servers in the age of Big Data. The SDA research program has been explorin...

10
Tensilica AI Max - NNA 110 Single Core
Single-core neural network accelerator offering from 0.5 to 4 TOPS, optimized for machine learning inference applications

11
Tensilica Vision 110 DSP
The latest addition to the Vision DSP family, built using 128-bit SIMD and offering up to 0.4 TOPS of performance

12
Tensilica Vision 130 DSP
First DSP for embedded vision and AI with millions of units shipped in the market

13
Tensilica Vision 230 DSP
Built on our latest Xtensa NX architecture, offering up to 2.18 TOPS of performance

14
Tensilica Vision 240 DSP
Built using 1024-bit SIMD and offering up to 3.84 TOPS of performance

15
AI IP for Cybersecurity monitoring - Smart Monitor
In cryptography, an attack can be performed by injecting one or several faults into a device, thus disrupting its functional behavior. Commonly used techniques to inject faults consist of introducin...

16
AI-Capable 3D GPU
Vivante AI-GPU augments the stunning 3D graphics rendering capabilities of the Vivante 3D engines with a dedicated Neural Network Engine (NN) and Tensor Processing Fabric to support advanced capabili...

17
Arm Ethos-N57 NPU
ML Inference Processor with Balanced Efficiency and Performance

18
CDNN Deep Learning Compiler
The CEVA Deep Neural Network (CDNN) is a comprehensive compiler technology that creates fully-optimized runtime software for CEVA-XM Vision DSPs and NeuPro AI processors. Targeted for mass-market embedded devices, CDNN incorporates a broad range of network optimizations, advanced quantization algorithms, data flow management and fully-optimized compute CNN and RNN libraries into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing.
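As an illustration of the quantization step such a compiler performs when preparing a cloud-trained model for edge inference, here is a generic symmetric 8-bit post-training quantization sketch in NumPy. This is not the CDNN API; the function names are hypothetical, and production toolchains use far more sophisticated per-channel and mixed-precision schemes:

```python
import numpy as np

# Hypothetical sketch of symmetric int8 post-training quantization,
# the kind of transform an AI compiler applies before edge deployment.
# Names are illustrative only, not any vendor's API.
def quantize_int8(weights: np.ndarray):
    scale = float(np.abs(weights).max()) / 127.0  # map widest weight to int8 range
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Rounding error is at most scale/2 for in-range weights.
print(np.max(np.abs(w - w_hat)))
```

The int8 copy occupies a quarter of the float32 footprint and can feed integer MAC arrays directly, which is what makes this transform central to edge deployment.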

19
CEVA NeuPro-M - NPU IP family for generative and classic AI with highest power efficiency, scalable and future-proof
NeuPro-M redefines high-performance AI (Artificial Intelligence) processing for smart edge devices and edge compute with heterogeneous coprocessing, targeting generative and classic AI inferencing wor...

20
CEVA NeuPro-S - Edge AI Processor Architecture for Imaging & Computer Vision
NeuPro-S™ is a low power AI processor architecture for on-device deep learning inferencing, imaging and computer vision workloads.

21
CEVA-BX1 Multipurpose DSP/Controller
The CEVA-BX architecture delivers excellent all-round performance for a new generation of smart devices, providing the perfect alternative to special-purpose DSPs and MCUs with DSP co-processors that cannot handle the diverse algorithm needs of today's applications.

22
CEVA-BX2 Multipurpose DSP/Controller
CEVA-BX2 is a multipurpose hybrid DSP and Controller, designed for the inherent low power requirements of DSP kernels with high-level programming and compact code size requirements of a large control code base.

23
Cortex-M55
The Arm Cortex-M55 processor is Arm's most AI-capable Cortex-M processor and the first to feature Arm Helium vector processing technology.

24
Cortex-M85
A new milestone for high-performance microcontrollers, the Arm Cortex-M85 is the highest-performing Cortex-M processor with...

25
Edge AI/ML accelerator (NPU)
TinyRaptor is a fully-programmable AI accelerator designed to execute deep neural networks (DNN) in an energy-efficient way. TinyRaptor reduces the inference time and power consumption needed to run ...

26
Ethos-N78
Highly Scalable and Efficient Second-Generation ML Inference Processor

27
Ethos-U55 Embedded ML Inference for Cortex-M Systems
Unlock the Benefits of AI with this Best-in-Class Solution

28
Ethos-U65
AI Innovation for Edge and Endpoint Devices

29
Ethos-U85

Accelerate Edge AI Innovation

AI data-processing workloads at the edge are already transforming use cases and user experiences. The third-generation Ethos NPU helps meet the needs of future e...


30
Maestro AI
Intelligent Clock Networking Solutions that Adapt on the Fly

31
NeuPro Family of AI Processors
Dedicated low-power AI processor family for deep learning at the edge, providing self-contained, specialized AI processors that scale in performance for a broad range of end markets including IoT, smartphones, surveillance, automotive, robotics, medical and industrial.

32
Neural Network Processor for Intelligent Vision, Voice, Natural Language Processing
Efficient and Versatile Computer Vision, Image, Voice, Natural Language, Neural Network Processor

33
Vivante VIP8000
The Vivante VIP8000 consists of a highly multi-threaded Parallel Processing Unit, Neural Network Unit and Universal Storage Cache Unit.

34
4-/8-bit mixed-precision NPU IP
OPENEDGES, the world's only total memory system and AI platform IP solution company, releases the first commercial mixed-precision (4-/8-bit) computation NPU IP, ENLIGHT. When ENLIGHT is used with ot...

36
AI Accelerator IP- ENLIGHT
OPENEDGES™ Artificial Intelligence Compute Engine ENLIGHT™ is a deep learning accelerator IP that delivers unrivaled compute density and energy efficiency. ENLIGHT™ NPU IP ...

37
c.WAVE100 - Deep Learning based Fully Hardwired Object Detection IP
Chips&Media's Computer Vision IP is deep learning based object detection with the capability to process 4K resolution input at 30 FPS in real time.

38
CMNP - Chips&Media Neural Processor
Chips&Media's CMNP, the company's new Neural Processing Unit (NPU) product, competes in high-performance neural processing IP for edge devices. CMNP provides exceptionally enhanced image quality based on...

39
ENLIGHT Pro - 8/16-bit mixed-precision NPU IP

The state-of-the-art inference neural processing unit (NPU) IP is suitable for high-performance edge devices including automotive, cameras, and more. ENLIGHT Pro is meticulously engineered to deliv...


40
memBrain

As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functi...




© 2023 Design And Reuse

All rights reserved.
No part of this website may be copied, reposted, reproduced, or otherwise used without permission from Design&Reuse.