Industry’s First MCU-based Implementation of Glow Neural Network Compiler for Machine Learning at the Edge
Tuesday, 28 July 2020, 15:30
EINDHOVEN, The Netherlands, July 28, 2020 (GLOBE NEWSWIRE) -- NXP Semiconductors N.V. (NASDAQ: NXPI) today released its eIQ Machine Learning (ML) software support for the Glow neural network (NN) compiler, delivering the industry’s first NN compiler implementation for higher performance with a low memory footprint on NXP’s i.MX RT crossover MCUs. Developed by Facebook, Glow can integrate target-specific optimizations, and NXP leveraged this ability using NN operator libraries for Arm Cortex-M cores and the Cadence Tensilica HiFi 4 DSP, maximizing the inferencing performance of its i.MX RT685, i.MX RT1050, and i.MX RT1060 crossover MCUs. This capability is merged into NXP’s eIQ Machine Learning Software Development Environment, freely available within NXP’s MCUXpresso SDK.

Exploiting MCU Architectural Features using Glow

“The standard, out-of-the-box version of Glow from GitHub is device agnostic to give users the flexibility to compile neural network models for basic architectures of interest, including the Arm Cortex-A and Cortex-M cores, as well as RISC-V architectures,” said Dwarak Rajagopal, Software Engineering Manager at Facebook. “By using purpose-built software libraries that exploit the compute elements of their MCUs and delivering a 2-3x performance increase, NXP has demonstrated the wide-ranging benefits of using the Glow NN compiler for machine learning applications, from high-end cloud-based machines to low-cost embedded platforms.”

Optimized Machine Learning Frameworks for Competitive Advantage

“NXP is driving the enablement of machine learning capabilities on edge devices, leveraging the robust capabilities of our highly integrated i.MX application processors and high-performance i.MX RT crossover MCUs with our eIQ ML software framework,” said Ron Martino, senior vice president and general manager, NXP Semiconductors. “The addition of Glow support for our i.MX RT series of crossover MCUs allows our customers to compile deep neural network models and give their applications a competitive advantage.”

NXP’s edge intelligence (eIQ) environment is a comprehensive ML toolkit that provides the building blocks developers need to efficiently implement ML in edge devices. With the merging of Glow into eIQ software, ML developers will now have a comprehensive, high-performance framework that is scalable across NXP’s edge processing solutions, including the i.MX RT crossover MCUs and i.MX 8 application processors. Customers will be better equipped to develop ML applications such as voice control, object recognition, and facial recognition on i.MX RT MCUs and i.MX application processors.

Accelerated Performance with NXP’s Glow Neural Network Implementation

NXP’s enablement for Glow is tightly coupled with the Neural Network Library (NNLib) that Cadence provides for its Tensilica HiFi 4 DSP, which delivers 4.8 GMACs of performance. Using a CIFAR-10 neural network model as a benchmark, NXP’s implementation of Glow achieves a 25x performance advantage by using this DSP to accelerate the NN operations.

“The Tensilica HiFi 4 DSP was originally integrated in the i.MX RT600 crossover MCU to accelerate a broad range of audio and voice processing applications. However, as the number of ML inference applications targeting low-cost, low-power MCU-class applications has increased, the inherent DSP computational performance of the HiFi 4 DSP makes it an ideal target to accelerate these NN models,” said Sanjive Agarwala, corporate VP, Tensilica IP at Cadence. “Through NXP’s Glow implementation in eIQ ML software, customers of i.MX RT600 MCUs can leverage the DSP to address a number of ML applications including keyword spotting (KWS), voice recognition, noise reduction and anomaly detection.”
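For developers evaluating this workflow, the following minimal sketch shows how application code on an MCU typically drives a neural network that Glow has compiled ahead of time into a bundle. It is illustrative only: the header and weights file names, the MODEL_* macros, the model() entry point, and the input/output offsets stand in for identifiers that the compiler generates from the actual network, so a real eIQ/MCUXpresso SDK project will differ in detail.

```c
/*
 * Minimal sketch of invoking a Glow ahead-of-time compiled bundle on an MCU.
 * All identifiers below (model.h, model.weights.txt, the MODEL_* macros, the
 * model() entry point, and the tensor offsets) are placeholders: Glow
 * generates the real names from the network being compiled.
 */
#include <stdint.h>
#include <string.h>
#include "model.h" /* header emitted by the Glow model-compiler (name assumed) */

/* Constant weights pool, initialized from the weights data emitted alongside
 * the bundle (text form assumed here; a binary blob can be linked instead). */
__attribute__((aligned(MODEL_MEM_ALIGN)))
static uint8_t constant_weights[MODEL_CONSTANT_MEM_SIZE] = {
#include "model.weights.txt"
};

/* Mutable pool holding the input and output tensors at generated offsets. */
__attribute__((aligned(MODEL_MEM_ALIGN)))
static uint8_t mutable_weights[MODEL_MUTABLE_MEM_SIZE];

/* Scratch pool for intermediate activations. */
__attribute__((aligned(MODEL_MEM_ALIGN)))
static uint8_t activations[MODEL_ACTIVATIONS_MEM_SIZE];

/* Run one inference: copy a preprocessed input in, execute the compiled
 * graph, then copy the result tensor out. Returns the bundle's status code. */
int run_inference(const void *input, size_t input_bytes,
                  void *output, size_t output_bytes)
{
    memcpy(mutable_weights + MODEL_input, input, input_bytes);

    int status = model(constant_weights, mutable_weights, activations);
    if (status != 0) {
        return status; /* non-zero indicates the bundle reported an error */
    }

    memcpy(output, mutable_weights + MODEL_output, output_bytes);
    return 0;
}
```

Which operator library the compiled bundle relies on, for example CMSIS-NN kernels on a Cortex-M core or Cadence’s NNLib for the HiFi 4 DSP, is determined when the model is compiled, not by the calling code sketched above.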
“NXP’s inclusion of the Arm CMSIS-NN software library in eIQ is designed to maximize the performance and minimize the memory footprint of neural networks on Arm Cortex-M cores,” said Dennis Laudick, VP Marketing, Machine Learning at Arm. “Using a CIFAR-10 neural network model as an example, NXP is able to achieve a 1.8x performance advantage with CMSIS-NN. Other NN models should yield similar results, clearly demonstrating the benefits of this advanced compiler and our optimized NN operator library.”

Availability

About the i.MX RT Series of Crossover MCUs

For more information, go to www.nxp.com/eiq and www.nxp.com/eiq/glow

About NXP Semiconductors

For more information, please contact:
NXP-IoT

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/7a13fd76-8c48-43aa-91a8-e5cf6ceffa10
Related Links: NXP Semiconductors N.V.