Hardware collaboration goes far beyond hardware

Can hardware accelerators deliver the performance they claim without deep collaboration?

AImotive Team
Oct 8 · 5 min read

Written by Márton Fehér

Unless hardware accelerators are designed with the rest of the system, software, and algorithms in mind, great benchmark results often fail to translate into great production solutions.

Hardware Accelerators make things go faster — right?

Over the years, chip designers have realized that while CPUs can execute almost anything, that flexibility comes at a significant price. As a result, accelerators are usually used for well-understood applications such as video, graphics, cryptography, or data compression. Dedicated hardware designed to execute just one task can deliver substantial increases in performance and reductions in power and/or silicon area, sometimes by orders of magnitude.

If the accelerator is not designed to work with everything else in the system, the claimed benefits are lost.

Since AImotive develops a broad portfolio of automated driving software, hardware, algorithms, and other technologies, we are fortunate to have several complementary skillsets under one roof. We use that know-how to ensure that whatever we design is properly thought through for real applications, not just benchmarks. And since AImotive came from a graphics and SoC benchmarking background, we understand the difference!

Integration

The simple fact is: no matter how good an NN (Neural Network) accelerator is, in a real-time automotive inference application it must work perfectly with all the other parts of the system. Unless the complete system is engineered as a whole, any one part can destroy the performance of everything else.

An SoC puts incredibly high demands on its memory subsystem

A simple example of this is an NN accelerator sharing memory with a CPU. A high-performance 64-bit CPU cluster places enormous demands on external DRAM. Studies have shown that up to 50% of CPU performance can be lost to stalls, with the CPU simply waiting for data to be fetched into its caches. How frustrating to put all that work into designing a 2GHz CPU, only to find it runs no faster than a 1GHz version.

So imagine what happens when a 20 TOPS NN accelerator shares memory with that same CPU cluster. If, for example, the NN engine uses the DRAM shared with the CPU for intermediate calculation results, it will consume GBytes/s of additional memory bandwidth. If that slows down the CPU cluster, it slows down the entire system, regardless of how fast the NN accelerator might otherwise go.
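To make the arithmetic concrete, here is a rough DRAM budget sketch in Python. Every figure (channel bandwidth, CPU traffic, activation bytes per frame) is an illustrative assumption, not a measurement of any particular SoC:

```python
# Back-of-the-envelope DRAM budget for an SoC where a CPU cluster and an
# NN accelerator share one memory channel. All figures are assumptions.

LPDDR4_PEAK_GBS = 25.6          # e.g. one 64-bit LPDDR4-3200 channel, theoretical peak
CPU_DEMAND_GBS = 10.0           # assumed steady-state CPU cluster traffic

# Assumed NN workload: intermediate activations spilled to shared DRAM.
FRAMES_PER_SEC = 30
ACTIVATION_MB_PER_FRAME = 150   # activation bytes per inference (assumption)

# Each intermediate result is written out and read back: factor of 2.
nn_demand_gbs = FRAMES_PER_SEC * ACTIVATION_MB_PER_FRAME * 2 / 1024
total = CPU_DEMAND_GBS + nn_demand_gbs

print(f"NN accelerator DRAM demand: {nn_demand_gbs:.1f} GB/s")
print(f"Total demand: {total:.1f} GB/s vs {LPDDR4_PEAK_GBS} GB/s theoretical peak")
if total > 0.7 * LPDDR4_PEAK_GBS:   # sustained DRAM efficiency rarely exceeds ~70%
    print("Memory-bound: the CPU and NN engine will stall each other.")
```

With these assumed numbers, spilling activations to shared DRAM adds almost 9 GB/s of traffic and pushes the channel past what it can realistically sustain.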

As a result, it might all look great on paper but massively underperform in practice. And it only gets more challenging when the CPU cluster shares the same SoC with a GPU cluster, another big consumer of memory bandwidth.

So dataflow through the entire system, and the timing and prioritization of that dataflow, is a massive challenge for any systems integrator. That’s why anyone designing an NN accelerator must consider how it integrates with the host CPU and the rest of the hardware system, or risk causing more problems than it solves.

Bandwidth

Since a hardware accelerator is, by definition, high performance, it can only work as fast as data can be fed to it and its results consumed by the rest of the application. This simple fact of life is often missed when hardware designers focus only on making hardware capable of delivering exceptional peak performance.

This is especially the case for NN accelerators like aiWare. You need to understand where the data is coming from, where it is going, its burst timing and latency characteristics, how the software interacts with the NN processor, and many other factors. Each of these affects the accelerator’s ability to do its job in a real application.
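One way to quantify "it can only work as fast as data is fed to it" is a simple roofline check: achievable throughput is capped by feed bandwidth times arithmetic intensity. A minimal sketch, with every number assumed for illustration:

```python
# Roofline-style sanity check: can the memory system feed the accelerator?
# All parameters are assumptions for illustration.

PEAK_TOPS = 20.0        # the accelerator's headline compute
FEED_BW_GBS = 12.0      # bandwidth actually available to the accelerator

# Arithmetic intensity: operations performed per byte moved. Convolutions
# with good on-chip reuse can reach hundreds of ops/byte; poor reuse
# drops this sharply and makes the workload memory-bound.
ops_per_byte = 400

achievable_tops = min(PEAK_TOPS, FEED_BW_GBS * ops_per_byte / 1000)
print(f"Achievable: {achievable_tops:.1f} of {PEAK_TOPS} TOPS "
      f"({100 * achievable_tops / PEAK_TOPS:.0f}% of peak)")
```

Under these assumptions the 20 TOPS engine sustains under 5 TOPS: the headline number is irrelevant if the feed bandwidth is not there.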

The aiWare external NN accelerator concept ensures memory bandwidth is managed so that the SoC continues to run efficiently even under peak NN workloads

That’s why aiWare has dedicated external DRAM for larger configurations. Perhaps more importantly, it also has significantly more on-chip SRAM, distributed between every MAC, to ensure data is always flowing smoothly. Indeed, aiWare has as much as two orders of magnitude more on-chip memory bandwidth than other well-respected engines claiming similar TOPS performance. This attention to dataflow, both on-chip in every clock cycle and off-chip, and to how bandwidth is shared with the host CPU, is why aiWare can confidently claim up to 95% sustained efficiency for vision-related workloads. It also ensures that aiWare places the minimum possible demands on the host CPU and the rest of the hardware system.
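The scale of that difference is easy to see with a back-of-the-envelope comparison between many small SRAM banks accessed in parallel and one external DRAM channel. The bank count, width, and clock below are assumptions chosen for illustration, not published aiWare figures:

```python
# Aggregate on-chip SRAM bandwidth vs. a single external DRAM channel.
# Purely illustrative parameters; not published aiWare figures.

CLOCK_GHZ = 1.0
NUM_SRAM_BANKS = 1024           # assume one small bank per group of MACs
BYTES_PER_BANK_PER_CYCLE = 4    # e.g. one 32-bit word per bank per cycle

on_chip_gbs = NUM_SRAM_BANKS * BYTES_PER_BANK_PER_CYCLE * CLOCK_GHZ
dram_gbs = 25.6                 # one LPDDR4-3200 x64 channel, theoretical peak

print(f"On-chip aggregate: {on_chip_gbs:.0f} GB/s")
print(f"External DRAM:     {dram_gbs:.0f} GB/s")
print(f"Ratio: ~{on_chip_gbs / dram_gbs:.0f}x")  # roughly two orders of magnitude
```

Distributing memory next to the compute units multiplies bandwidth by the number of banks, which is why local SRAM can outrun any external channel by orders of magnitude.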

Prototype to Embedded

If you believe hardware vendors, it is easy to move a trained NN from a development framework to an embedded platform: just click the button in the SDK, and it’s done!

If only life were that simple. In reality, porting an NN trained in FP32 on an unconstrained PC or server to a highly constrained embedded SoC with hardware accelerators is an incredibly challenging and nuanced task, no matter how good the hardware platform is. It is not just a matter of tweaking some parameters: NNs almost always need to be substantially redesigned to work well in a limited embedded environment, where a host of sensors and other subsystems feeds real data to a substantially lower-performance CPU that has far less memory and must meet tough timing and power constraints.
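One of the many steps involved is quantization: networks trained in FP32 usually have to be converted to low-precision fixed point before they run efficiently on an embedded accelerator. A minimal NumPy sketch of symmetric per-tensor INT8 quantization; real toolchains add calibration data, per-channel scales, and accuracy re-validation on top of this core idea:

```python
import numpy as np

def quantize_int8(weights_fp32):
    """Map FP32 weights to INT8 plus a per-tensor scale factor."""
    scale = np.abs(weights_fp32).max() / 127.0   # symmetric range [-127, 127]
    q = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 values to estimate quantization error."""
    return q.astype(np.float32) * scale

# Hypothetical conv-layer weights, just to exercise the round trip.
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
print(f"scale={scale:.6f}, max round-trip error={err:.6f}")
```

Even this simplest scheme loses precision; keeping accuracy acceptable across a whole network is where the redesign effort, and the cross-team expertise, comes in.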

AImotive’s aiDrive platform includes software, algorithms and systems design for every aspect of vision-first automated driving — all developed in-house

That’s why we ensure that customers of any aiWare licensee gain access not only to our hardware support team but also to the expertise of our NN research, automotive AI software development, and test vehicle integration teams. We have many years’ experience moving NNs from research into deployment. Our partners’ customers work with AImotive to port NNs onto their target platforms. Because we can bring hardware, software, and NN algorithm engineers together whenever we need to, we can help our customers solve the challenges that make the transition from prototype to production so tough.

Great hardware is about far more than great hardware engineers.

It only becomes excellent when creative collaboration happens…

AImotive

AImotive develops a suite of automated driving software, simulation, and artificial neural network acceleration IP. We work with our international automotive partners to enable AI-based solutions that increase road safety by increasing driving automation.
