NVIDIA CEO Says “FPGA Is Not the Right Answer” for Accelerating AI

Synced | Published in SyncedReview | Mar 30, 2018

Accelerating resource-hungry AI applications demands chip performance beyond what CPUs or GPUs alone can deliver, prompting researchers to turn to application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs). Chip giant NVIDIA’s Founder and CEO Jensen Huang created a stir at yesterday’s GPU Technology Conference in Santa Clara, USA, when he dismissed one of these chip types for autonomous vehicle system development: “FPGA is not the right answer,” he said.

“FPGA is really for prototyping. If you want the [self-driving] car to be perfect, I would build myself an ASIC because self-driving cars deserve it,” says Huang.

FPGAs are logic chips best known for their programmability, which gives engineers the flexibility to configure an FPGA as, say, a microcontroller today and reuse the same chip as an audio codec tomorrow. ASICs, meanwhile, are custom chips with little or no programmability. Because FPGAs are more versatile, chip makers can streamline their operations by developing FPGAs rather than ASICs. However, FPGAs are both more expensive per unit and lower-performing than ASICs.
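To make the programmability point concrete, here is a toy Python sketch (our own illustration, not anything from NVIDIA or the conference) of the lookup-table (LUT) mechanism that underlies FPGA reconfigurability: the same fabric computes whatever truth table is loaded into it, so “reprogramming” simply means writing a new configuration, while an ASIC’s logic is fixed at fabrication.

```python
# Toy illustration of FPGA-style reconfigurability (not real hardware code).
# FPGAs are built from lookup tables (LUTs): small memories whose contents
# define an arbitrary logic function. Reprogramming the chip amounts to
# loading new bits into the LUTs; the silicon itself never changes.

class LUT2:
    """A 2-input lookup table: 4 configuration bits define any 2-input gate."""

    def __init__(self, config_bits):
        assert len(config_bits) == 4
        self.config = config_bits  # the "bitstream" for this LUT

    def __call__(self, a, b):
        # The inputs select which configuration bit drives the output.
        return self.config[(a << 1) | b]


# "Today": configure the LUT as an AND gate.
lut = LUT2([0, 0, 0, 1])
print([lut(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]

# "Tomorrow": reload the same LUT as an XOR gate; no new silicon required.
lut.config = [0, 1, 1, 0]
print([lut(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

An ASIC, by contrast, hardwires one specific function, which is exactly what buys the cost and performance advantage Huang is pointing to.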

“When you want to build something for cars, you should have a very large concentrated group of expert engineers design the chip one time and sell it to everyone, instead of a hundred random groups of different levels of capability and expertise build their own chips,” says Huang.

NVIDIA has never been impressed with FPGAs. Chief Scientist Bill Dally once said, “If you want to solve a problem and you are willing to devote a lot of engineering time, just develop the ASIC directly. I don’t think the FPGA is competitive.”

NVIDIA has been developing ASICs for years and has traditionally kept the technology under wraps. At last year’s GTC, however, the company open-sourced the architecture of its Deep Learning Accelerator (DLA), an ASIC for deep learning inference, on GitHub.

NVIDIA recently announced an agreement with British chip IP company Arm to integrate the DLA architecture into Arm’s new Project Trillium platform, which is designed to speed the development of AI inference accelerators. With 90 percent of AI-enabled devices shipped today based on Arm-developed architecture, NVIDIA’s DLA is expected to be deployed on billions of mobile, consumer electronics, and Internet of Things (IoT) devices.

However, even as NVIDIA snubs FPGAs, rivals like Intel are ramping up efforts to develop and deploy them. In 2015, Intel acquired Altera, a top US manufacturer of programmable logic devices, in an all-cash transaction valued at US$16.7 billion. Intel has since developed a hybrid CPU+FPGA chip for deep learning inference in the cloud.


Intel also introduced its Movidius Myriad X Vision Processing Unit (VPU), a system-on-chip (SoC) for vision devices such as smart cameras, augmented reality headsets, and drones. The Myriad X ships with a dedicated Neural Compute Engine (NCE) for running deep neural networks in real time at high speed and low power at the edge. With the NCE, the Myriad X can reach one trillion operations per second of deep learning inference performance.

Meanwhile, Xilinx, the world’s leading supplier of programmable logic devices, competes with Intel’s Altera in the FPGA market. While Intel dominates the server chip market, Xilinx holds the technology lead in FPGAs, helping it win orders from large cloud customers.

Chinese startup DeePhi Tech last year raised US$40 million in a funding round led by Xilinx to develop its Deep-Learning Processing Units (DPUs), which include both FPGA and ASIC chips.

Journalist: Tony Peng | Editor: Michael Sarazen

Dear Synced reader, Synced’s upcoming AI Weekly Newsletter will help you stay up to date on the latest AI trends. Each week, we provide a roundup of top AI news and stories and share upcoming AI events around the globe.

Subscribe here to get insightful tech news, reviews and analysis!

Synced | AI Technology & Industry Review | syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research: http://bit.ly/2TrUPMI | Twitter: @Synced_Global