Published in aiMotive
Exploiting parallelism in NN workloads to realize scalable, high-performance NN acceleration hardware

Written by Tony King-Smith, Product Manager of aiWare

Many automotive system designers, when evaluating hardware platforms for executing high-performance Neural Networks (NNs), determine the required compute power by simply adding up each NN's requirements: the total then defines the capabilities of the NN accelerator needed. Or does it?

The reality is that almost all automotive NN applications comprise a series of smaller NN workloads. By exploiting the many forms of parallelism inherent in automotive NN inference, a far more flexible approach using multiple NN acceleration engines can deliver superior results with far greater scalability, cost-effectiveness and power efficiency…
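To illustrate the idea (this is a hypothetical sketch, not aiMotive's method or data: the workload figures, engine sizes and the `makespan` helper are all invented for illustration), the snippet below schedules a set of independent NN workloads across several smaller engines using a simple longest-processing-time-first heuristic, and compares the result against running everything on a single monolithic accelerator sized by summing the workloads' demands.

```python
# Hypothetical illustration of parallelism across multiple NN engines.
# All numbers are invented; real workloads also have dependencies,
# memory-bandwidth limits and per-layer utilization effects.
import heapq

def makespan(workloads, num_engines, engine_tops):
    """Greedily assign independent NN workloads (compute demand in
    TOPS-ms, largest first) to identical engines; return the time in
    ms until the last engine finishes."""
    finish_times = [0.0] * num_engines
    heapq.heapify(finish_times)
    for work in sorted(workloads, reverse=True):
        earliest = heapq.heappop(finish_times)
        heapq.heappush(finish_times, earliest + work / engine_tops)
    return max(finish_times)

# Per-frame demand of each smaller NN workload, in TOPS-ms (invented)
workloads = [8.0, 6.0, 5.0, 4.0, 3.0, 2.0, 2.0]

# One large engine sized by summing demands vs. four smaller engines
one_big = makespan(workloads, num_engines=1, engine_tops=30.0)
four_small = makespan(workloads, num_engines=4, engine_tops=10.0)
```

Because the workloads are independent, the four smaller engines finish the frame sooner than the single summed-up accelerator, and each small engine is a simpler, more scalable building block; the trade-off in practice is scheduling overhead and imperfect load balance, which the simple heuristic above ignores.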

Read the full whitepaper here

aiMotive Team

aiMotive’s 220-strong team develops a suite of technologies to enable AI-based automated driving solutions built to increase road safety around the world.