
memBrain

Overview

As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functions such as video and voice recognition. The Deep Neural Networks (DNNs) used in AI applications require a vast number of Multiply-Accumulate (MAC) operations on stored weight values, and those weights must be kept in local storage for processing. This volume of data cannot fit into the on-board memory of a stand-alone digital edge processor.
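To make the scale of the problem concrete, the following sketch counts the weights and per-inference MAC operations of a small fully connected network. The layer sizes are purely illustrative (they are not taken from the product description); the point is that even a modest model needs hundreds of thousands of weights, each read once per inference.

```python
# Hypothetical layer sizes for a small edge-inference MLP
# (illustrative numbers only, not from the product description).
layer_sizes = [640, 512, 512, 32]

total_weights = 0
total_macs = 0
for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
    total_weights += fan_in * fan_out   # every weight must live in memory
    total_macs += fan_in * fan_out      # one MAC per weight per inference

print(f"weights: {total_weights:,}")            # → 606,208
print(f"MACs per inference: {total_macs:,}")    # → 606,208
```

For a model with 50M weights, as mentioned below, the same arithmetic implies tens of millions of MACs and weight fetches per inference, which is why keeping the weights next to the compute matters.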

Based on SuperFlash® technology and optimized to perform Vector Matrix Multiplication (VMM) for neural network inference, our memBrain™ neuromorphic memory product improves the system-level implementation of VMM through an analog compute-in-memory approach, enhancing AI inference at the edge. Current neural network models may require 50M or more weights. The memBrain neuromorphic memory product stores synaptic weights in the floating gate itself, which significantly reduces system latency by eliminating weight fetches from off-chip DRAM over the system bus. Compared to traditional digital DSP and SRAM/DRAM-based approaches, it delivers a 10- to 20-fold power reduction, significantly lower cost, and improved inference frame latency.
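The idea behind analog compute-in-memory VMM can be sketched numerically. In a memory array, each weight is stored as a cell conductance G; inputs are applied as row voltages V, and by Kirchhoff's current law each column wire sums its cells' currents I = V·G, so the column currents form the vector-matrix product in a single analog step. The array size and values below are illustrative assumptions, not memBrain specifications:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x3 weight matrix stored as cell conductances (siemens);
# dimensions and ranges are illustrative, not product specifications.
G = rng.uniform(1e-6, 1e-5, size=(4, 3))  # one conductance per floating-gate cell
v_in = rng.uniform(0.0, 0.5, size=4)      # input activations applied as row voltages

# Kirchhoff's current law: each column sums its cells' I = V * G contributions,
# so the column currents ARE the vector-matrix product, produced in one step.
i_out = v_in @ G

# A digital processor reaches the same result only via explicit MAC loops,
# fetching each weight from memory along the way:
i_ref = np.zeros(3)
for j in range(3):
    for i in range(4):
        i_ref[j] += v_in[i] * G[i, j]

assert np.allclose(i_out, i_ref)
```

The digital loop performs one memory fetch and one MAC per weight, while the analog array computes all columns concurrently with the weights already in place, which is the source of the power and latency advantage described above.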


© 2023 Design And Reuse

All rights reserved.

No part of this website may be reproduced, retransmitted, republished, or otherwise used without permission from Design&Reuse.