NVIDIA’s Xavier system on a chip (SoC) for self-driving cars recently passed TÜV ISO 26262 functional safety testing. Reading NVIDIA’s blog post on this achievement, I was struck by just how many specialized processors Xavier has, many of which were new to me.
Also, did you know there exists a site called Wikichip?
Of course an NVIDIA SoC will have a GPU, in this case a Volta GPU. The Volta GPU on the Xavier is optimized for inference. That means the neural network is probably going to be trained somewhere else and then loaded onto this platform when it’s ready for production deployment.
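To make the train-elsewhere/infer-on-device split concrete, here's a toy sketch (my own illustration, not NVIDIA's pipeline): a trivial linear model is "trained" on a workstation, its weights are serialized, and only the forward pass runs on the deployed platform.

```python
import numpy as np

# --- "Training" stage: runs on a workstation or in the cloud ---
# Hypothetical toy model: fit weights for y = x @ w via least squares.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w, *_ = np.linalg.lstsq(X, y, rcond=None)
np.save("model_weights.npy", w)  # ship this artifact to the vehicle

# --- "Inference" stage: the only part that runs on the deployed SoC ---
# No optimizer, no gradients, just the forward pass.
deployed_w = np.load("model_weights.npy")
prediction = np.array([1.0, 1.0, 1.0]) @ deployed_w
print(prediction)  # ~ 1.0 - 2.0 + 0.5 = -0.5
```

An inference-optimized GPU can then spend its silicon on the forward pass (often at reduced precision) rather than on training machinery.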
Wikichip lists this GPU at 22.6 tera-operations per second (TOPS). For comparison, Tesla's purpose-built self-driving chip boasts 36 TOPS. I confess I don't know enough about how far toward the redline these chips actually run to say whether 22.6 TOPS vs. 36 TOPS is basically the same thing or wildly different.
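One way to put the gap in perspective is a back-of-envelope throughput estimate. The numbers below are rough public figures, and the per-inference cost (ResNet-50 at roughly 4 GFLOPs per image, i.e. ~8 giga-operations counting multiplies and adds separately) is an assumption for illustration:

```python
# Back-of-envelope only: real throughput depends heavily on achieved
# utilization, numeric precision, and memory bandwidth.
XAVIER_TOPS = 22.6   # Wikichip's figure for Xavier's GPU
TESLA_TOPS = 36.0    # Tesla's stated figure for its chip

# Assumed cost of one inference: ResNet-50 is often quoted at roughly
# 4 GFLOPs per image, ~8e9 ops counting multiply and add separately.
OPS_PER_INFERENCE = 8e9

for name, tops in [("Xavier GPU", XAVIER_TOPS), ("Tesla chip", TESLA_TOPS)]:
    inferences_per_sec = tops * 1e12 / OPS_PER_INFERENCE
    print(f"{name}: ~{inferences_per_sec:.0f} inferences/sec at peak")
```

Under those assumptions both chips deliver thousands of inferences per second at peak, which suggests sustained utilization may matter more than the headline TOPS difference.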
Although NVIDIA is a GPU company, the Xavier also has a CPU: eight of NVIDIA's custom Carmel ARM64 cores. I assume it's fast.
Xavier includes a vision processing unit (VPU), which makes sense for a SoC designed for lots of cameras.
NVIDIA sometimes calls this a “Stereo/Optical Flow accelerator.” Optical flow is a computer vision technique for estimating the apparent motion of pixels between successive frames; paired with stereo disparity, it can be used to infer distance and velocity. I assume more generally the goal is to accelerate vision algorithms that operate on sequential frames of video.
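To show what such an accelerator computes, here's a deliberately simple optical-flow estimate using block matching, my own toy stand-in for the dense hardware-accelerated methods, not anything NVIDIA documents:

```python
import numpy as np

def block_match(frame1, frame2, y, x, size=3, search=3):
    """Estimate the displacement of the size x size patch at (y, x) in
    frame1 by exhaustively searching a window in frame2 (minimum sum of
    absolute differences). Returns the best (dy, dx)."""
    patch = frame1[y:y+size, x:x+size]
    best_sad, best_dv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > frame2.shape[0] or xx + size > frame2.shape[1]:
                continue  # candidate patch falls outside the frame
            sad = np.abs(frame2[yy:yy+size, xx:xx+size] - patch).sum()
            if sad < best_sad:
                best_sad, best_dv = sad, (dy, dx)
    return best_dv

# A bright square at (4, 4) in frame1 moves 2 px down and 1 px right.
frame1 = np.zeros((16, 16)); frame1[4:7, 4:7] = 1.0
frame2 = np.zeros((16, 16)); frame2[6:9, 5:8] = 1.0
print(block_match(frame1, frame2, 4, 4))  # (2, 1)
```

Doing this densely, for every pixel of every frame at camera frame rates, is exactly the kind of embarrassingly parallel workload that justifies a dedicated accelerator.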
I had not heard of image signal processors before. Like a VPU, an ISP is designed to accelerate algorithms on camera data. ISPs seem to focus on individual high-resolution frames, probably for classification tasks on things like road signs.
Vision is clearly a strength of the Xavier. The programmable vision accelerator is an NVIDIA proprietary technology. The best documentation I could find is a patent that seems to focus on collapsing multiple loops into a single loop in order to accelerate vision calculations.
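The patent is dense, but the core idea, collapsing nested loops into a single loop, is easy to sketch. Here's a minimal Python illustration (my own, not NVIDIA's actual mechanism): a row/column traversal of an image becomes one loop over a flat index, exposing a single long iteration space that hardware can pipeline.

```python
# A small "image" where each pixel's value encodes its flat position.
HEIGHT, WIDTH = 4, 6
image = [[y * WIDTH + x for x in range(WIDTH)] for y in range(HEIGHT)]

# Conventional nested loops over rows and columns:
nested_sum = 0
for y in range(HEIGHT):
    for x in range(WIDTH):
        nested_sum += image[y][x]

# The same traversal collapsed into a single loop:
# recover (y, x) from one flat index with divmod.
collapsed_sum = 0
for i in range(HEIGHT * WIDTH):
    y, x = divmod(i, WIDTH)
    collapsed_sum += image[y][x]

print(nested_sum == collapsed_sum)  # True
```

The payoff in hardware is that one loop with a large trip count is easier to keep full than two short loops with per-row startup overhead.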
The “programmable” qualifier presumably means that firmware engineers can customize this chip to their specific needs.
The deep learning accelerator is an open-source architecture NVIDIA has released to create accelerators for neural network inference. It’s really cool that NVIDIA has open-sourced this technology.
As with the PVA, the DLA appears to be customizable, in this case at the hardware level: NVIDIA releases the architecture as Verilog RTL, so customers can adapt the design to meet their needs.
Most likely one goal of the DLA is to accelerate inference on lidar and other sensor data that the Xavier's vision-oriented chips are not optimized for.
That is a lot of processing power and specialization on one SoC!
Here’s NVIDIA CEO Jensen Huang touting the DRIVE AGX Xavier Developer Kit, which contains two Xavier SoCs.