NXP Semiconductors Releases Glow Memory Analysis App Note

Glow is a compiler that interfaces to the PyTorch machine learning framework.

The recent addition of Glow compiler support within NXP’s eIQ™ machine learning (ML) software development platform has been well received by the ML development community. Glow is a compiler that interfaces to the PyTorch machine learning framework and is designed to help developers optimize neural network graphs and generate code for target hardware devices such as low-power MCUs. The generated code can then be integrated into a high-level framework such as an NXP MCUXpresso Software Development Kit (SDK) project.
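To make that integration step concrete, here is a minimal sketch of what calling a Glow-generated bundle from C can look like. It assumes a model compiled under the name lenet_mnist, so the header name, weights file name, memory-size macros and entry-point function shown here are illustrative; the actual names are derived from whatever model name is passed to Glow when generating the bundle.

#include <stdint.h>
#include "lenet_mnist.h" /* header generated by Glow for the bundle (assumed name) */

/* Statically allocated memory regions, sized and aligned with the
   macros emitted in the generated bundle header. */
GLOW_MEM_ALIGN(LENET_MNIST_MEM_ALIGN)
static uint8_t constantWeight[LENET_MNIST_CONSTANT_MEM_SIZE] = {
#include "lenet_mnist.weights.txt" /* weight data emitted by Glow (assumed name) */
};

GLOW_MEM_ALIGN(LENET_MNIST_MEM_ALIGN)
static uint8_t mutableWeight[LENET_MNIST_MUTABLE_MEM_SIZE]; /* model inputs and outputs */

GLOW_MEM_ALIGN(LENET_MNIST_MEM_ALIGN)
static uint8_t activations[LENET_MNIST_ACTIVATIONS_MEM_SIZE]; /* scratch memory */

int run_inference(void)
{
    /* The bundle's entry point carries the model's name and runs one inference. */
    return lenet_mnist(constantWeight, mutableWeight, activations);
}

Because all three regions are plain statically sized arrays, the linker map of an SDK project shows directly how much flash and SRAM the model will consume.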

Last fall we explored how developers can use Glow for inferencing neural network models on low-power NXP MCUs. We’ve now released a new application note providing a detailed analysis of Glow memory usage.

The AN13001 Glow Memory Analysis app note explains how to calculate the minimum memory size needed to run an ML model using a generated Glow bundle, so the on-chip and external memory requirements for a particular model are known ahead of time. The app note also explores the impact of quantization on memory usage, as well as the impact of reading weight data from flash memory versus RAM.

Because Glow uses statically allocated memory, the memory requirements for a particular ML model can be determined as soon as the model has been compiled with Glow. This lets developers identify right-sized target devices for their ML models ahead of time.
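As a rough illustration of that up-front sizing, the generated bundle header already contains everything needed to tally the requirements at build time. The snippet below continues the assumed lenet_mnist naming from the earlier example, and the 192 KB SRAM budget is only a placeholder value.

#include "lenet_mnist.h" /* generated bundle header (assumed name) */

/* Constant weights can live in flash; the mutable and activation
   regions must fit in RAM. */
static const unsigned long model_flash_bytes = LENET_MNIST_CONSTANT_MEM_SIZE;
static const unsigned long model_ram_bytes   = LENET_MNIST_MUTABLE_MEM_SIZE
                                             + LENET_MNIST_ACTIVATIONS_MEM_SIZE;

/* Fail the build if the model exceeds the device's SRAM budget (example value). */
_Static_assert(LENET_MNIST_MUTABLE_MEM_SIZE + LENET_MNIST_ACTIVATIONS_MEM_SIZE
               <= 192u * 1024u,
               "model does not fit in the assumed 192 KB SRAM budget");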

The app note provides real-world examples of how different Glow compile options, such as quantization and the CMSIS-NN and HiFi4 extensions, affect MCU memory usage, and describes the inference-time trade-offs those options involve. It also explains how the location of weight data affects performance and memory usage, with examples of that trade-off to help developers optimize a model for a particular embedded system.
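One of those trade-offs, weight placement, can be sketched in a few lines. This again uses the assumed lenet_mnist naming from the earlier examples; the point is simply that the same constant-weight region can either stay in flash or be copied into SRAM at start-up, trading RAM for faster access on devices where flash reads add wait states.

#include <stdint.h>
#include <string.h>
#include "lenet_mnist.h" /* generated bundle header (assumed name) */

/* Option A: leave the constant weights in flash. No extra SRAM is used,
   but inference may be slower on parts with slower flash access. */
GLOW_MEM_ALIGN(LENET_MNIST_MEM_ALIGN)
static const uint8_t constantWeightFlash[LENET_MNIST_CONSTANT_MEM_SIZE] = {
#include "lenet_mnist.weights.txt" /* weight data emitted by Glow (assumed name) */
};

/* Option B: reserve an SRAM copy and fill it once at start-up, spending
   LENET_MNIST_CONSTANT_MEM_SIZE bytes of RAM in exchange for faster reads. */
GLOW_MEM_ALIGN(LENET_MNIST_MEM_ALIGN)
static uint8_t constantWeightRam[LENET_MNIST_CONSTANT_MEM_SIZE];

void copy_weights_to_ram(void)
{
    memcpy(constantWeightRam, constantWeightFlash, LENET_MNIST_CONSTANT_MEM_SIZE);
}

Whichever buffer is then passed as the first argument to the bundle's entry function determines where the weights are read from during inference; the app note walks through the resulting memory and performance differences.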

Glow is an excellent option for compiling and running ML models on embedded systems. We’re adding powerful new features to the next release of NXP’s eIQ machine learning software in Q1 2021. Stay tuned for more information and exciting new ML software developments.
