
LeapMind Announces the Beta Release of their Ultra-low Power Consumption AI Inference Accelerator IP

Oct. 05, 2021 – 

Tokyo, Japan - October 5, 2021 -- LeapMind Inc., a creator of the standard in edge AI (Shibuya-ku, Tokyo; CEO: Soichi Matsuda), today announced the beta release of its ultra-low power consumption AI inference accelerator IP "Efficiera" version 2 (v2) ahead of its commercial launch by the end of this year. Efficiera, originally developed and licensed by LeapMind Inc., has been highly valued since its launch in October 2020 for its power savings, high performance, small footprint, and performance scalability, which enable easy installation on small FPGAs and, through support for mass-produced boards, help shorten the development period of end users' final products. With the release of the v2 beta, LeapMind welcomes trial use and feedback from SoC vendors, end-user product designers, and others. To obtain the v2 beta, please contact us at business@leapmind.io.

Main specifications/features of Efficiera v2

"It is a pleasure for us to release the beta version while development of Efficiera v2 is proceeding on schedule", commented Katsutoshi Yamazaki, LeapMind's VP of Business. "Efficiera v2 is a product designed for use in ASIC/ASSP as well as FPGA, and its hardware performance enables image processing at the edge using machine learning. In the market, there is ongoing discussion about whether cloud processing or edge processing is better for complex inference workloads that require large amounts of data, considering conditions and constraints such as network load, power efficiency, and real-time performance. Through our core technology, extremely low bit quantization, which we have developed with inference on large volumes of complex data on edge devices in mind, we aim to promote device development that lets users choose edge inference without hesitation. Efficiera v2 achieves dramatically improved processing capability, especially for image processing, and our development department has confirmed that it performs well in situations where high-definition image processing by AI on edge devices is required. We hope that Efficiera v2 will trigger a significant expansion of the conventional application scope of machine learning at the edge".


About Efficiera

Efficiera is an ultra-low power AI inference accelerator IP specialized for CNN inference processing that runs as a circuit on FPGA or ASIC devices. Its "ultra-small quantization" technology minimizes the number of quantization bits to 1 to 2 bits, maximizing the power and area efficiency of convolution, which accounts for most of the inference processing, without requiring advanced semiconductor manufacturing processes or special cell libraries. With this product, deep learning functions can be incorporated into a variety of edge devices that are constrained by power, cost, and heat dissipation, which has been technically difficult in the past. These include consumer electronics such as home appliances, industrial equipment such as construction machinery, surveillance cameras, broadcasting equipment, and small machines and robots. Visit the product website at https://leapmind.io/business/ip/
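The release does not disclose LeapMind's actual quantization scheme, but the general idea behind 1-bit weight quantization can be sketched as follows. This toy example uses XNOR-Net-style sign-and-scale quantization, an assumption for illustration only: each weight is reduced to its sign plus one shared scale factor, so a dot product (the core of convolution) becomes additions and subtractions with a single final multiply, which is why power and silicon area drop so sharply.

```python
# Illustrative sketch only; NOT LeapMind's published method.

def binarize_weights(weights):
    """Quantize float weights to 1 bit each: a sign per weight plus one
    shared scale (the mean absolute value, a common choice in the
    binary-network literature)."""
    scale = sum(abs(w) for w in weights) / len(weights)
    signs = [1 if w >= 0 else -1 for w in weights]
    return signs, scale

def dot_binarized(signs, scale, activations):
    """Dot product against binarized weights: only adds/subtracts in the
    loop, one multiply at the end."""
    acc = 0.0
    for s, a in zip(signs, activations):
        acc += a if s > 0 else -a
    return acc * scale

weights = [0.4, -0.2, 0.7, -0.5]
acts = [1.0, 2.0, 3.0, 4.0]
signs, scale = binarize_weights(weights)
approx = dot_binarized(signs, scale, acts)
exact = sum(w * a for w, a in zip(weights, acts))
```

In hardware, the per-element multiplies disappear entirely; the approximation error (here `approx` vs. `exact`) is what quantization-aware training is used to compensate for.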

About LeapMind

LeapMind Inc. was founded in 2012 with the corporate philosophy of "bringing new devices that use machine learning to the world". Total investment in LeapMind to date has reached 4.99 billion yen (as of May 2021). The company's strength is extremely low bit quantization for compact deep learning solutions. It has a proven track record with over 150 companies, centered on manufacturing, including the automobile industry. It is also developing its Efficiera semiconductor IP, drawing on its experience in both software and hardware development.






