The GPU-powered AI Revolution — NVIDIA GTC 2017

The recent surge of deep learning to the forefront of AI research was a result of the convergence of three things, according to Fei-Fei Li of Stanford’s computer vision laboratory. The first piece was the breakthroughs in deep learning research from the likes of Geoff Hinton and Yann LeCun. The second was the growing availability of enormous amounts of data for training. Finally, the advent of running neural networks on GPUs made training more complex deep networks computationally tractable. NVIDIA’s GPU technology has played a huge role in this success.

At the NVIDIA GPU Technology Conference (GTC) last week, CEO Jensen Huang unveiled the company’s latest GPU technologies, focusing on their applications in deep learning. Here’s a quick summary of the main takeaways of his keynote.

NVIDIA wants to power the AI revolution by building GPUs, containerizing the complex deep learning software stack, and creating an open platform that is compatible with all deep learning frameworks and cloud providers.

The AI Revolution

  • The 2012 Big Bang of modern AI was made possible by several breakthroughs in research, enormous amounts of data, and the use of GPUs to train neural networks. “Deep learning has democratized computing. Not everyone knows how to program, but everybody has their own data.”
  • Deep learning model size and complexity is exploding. “This is the ultimate high-performance computing problem, which is why we need to continue to push the limits of computing.”
  • Inception, NVIDIA’s AI and deep learning startup incubator, has worked with 1,300 startups in 18 months. Highlighted startups: Blue River (automated fertilization of farms) and MapD (GPU-powered database engine).

Product Announcements

  • Tesla V100: a chip built on the new Volta GPU architecture with Tensor Core technology, which optimizes the GPU for matrix multiplications and allows for a 12x speedup in deep learning training (see the sketch after this list).
  • NVIDIA DGX-1 with Tesla V100: an “essential instrument of AI research” that reduces processing time from days to hours and replaces 400 servers with a single box. There’s also the less powerful DGX Station, a personal AI supercomputer for your desktop, and the HGX-1, which brings this accelerator to the public cloud.
  • TensorRT for TensorFlow: a compiler for deep learning inference that delivers a 15x speed-up. Low inference latency is important for AI applications that work in real time. NVIDIA makes the case for GPU-accelerated data centers in which 500 traditional CPU servers can be replaced by 33 GPU-accelerated servers, roughly the same 15x ratio.
  • NVIDIA wants to make it easier to maintain the complex stack of systems and frameworks used in deep learning development. NVIDIA GPU Cloud containerizes the most popular deep learning frameworks, datasets, and models, and allows developers to start deep learning on GPU instances in minutes.
  • The DLA (deep learning accelerator) in the Xavier processor is now open source, to accelerate the proliferation of deep learning development.
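
Tensor Cores accelerate matrix multiplications carried out in half precision (FP16) with single-precision (FP32) accumulation, the operation that dominates deep learning training. As a rough illustration of what that looks like at the CUDA level, here is a minimal sketch, assuming CUDA 9 or later with cuBLAS and a Volta-class GPU; the matrix size is arbitrary and error checking is omitted for brevity.

```cuda
// Minimal sketch: an FP16 matrix multiply routed through Tensor Cores via cuBLAS.
// Assumes CUDA 9+ and a Volta-class GPU (e.g. Tesla V100). Error checks omitted.
#include <cublas_v2.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int N = 4096;                              // C = A * B, all N x N
    const size_t bytes = (size_t)N * N * sizeof(__half);

    __half *A, *B, *C;
    cudaMalloc(&A, bytes);
    cudaMalloc(&B, bytes);
    cudaMalloc(&C, bytes);
    // A real program would copy actual data into A and B here.

    cublasHandle_t handle;
    cublasCreate(&handle);
    // Opt in to Tensor Core math where the hardware supports it.
    cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);

    const float alpha = 1.0f, beta = 0.0f;
    // FP16 inputs and outputs with FP32 accumulation: the pattern Tensor Cores accelerate.
    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                 N, N, N,
                 &alpha,
                 A, CUDA_R_16F, N,
                 B, CUDA_R_16F, N,
                 &beta,
                 C, CUDA_R_16F, N,
                 CUDA_R_32F,                         // accumulate in FP32
                 CUBLAS_GEMM_DEFAULT_TENSOR_OP);
    cudaDeviceSynchronize();
    printf("GEMM finished\n");

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

In practice, most developers never write this call themselves: deep learning frameworks go through cuBLAS and cuDNN under the hood, so feeding a model FP16 data on Volta hardware is typically enough to pick up the Tensor Core path.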

Applications

  • Project Holodeck, a real-time, photorealistic virtual environment with interactive physics that allows for collaboration in virtual reality. Here’s a cool demo showcasing the use of Holodeck for collaborative car design.
  • NVIDIA Drive PX, an open AI car platform that handles everything from computer vision and mapping to fully driverless systems. Toyota will be using this for its autonomous vehicles.
  • ISAAC Robot Simulator uses virtual sensors to pre-train robots in a virtual world. Demo here.

Partnerships

  • SAP is offering its first commercial AI for the enterprise. It features brand impact analysis, service ticketing, and invoice-to-record applications driven by deep learning.
  • AWS is a launch partner of NVIDIA Volta.
  • NVIDIA GPUs will support Microsoft’s Cognitive Toolkit (CNTK) and will continue to be used in Azure.

Watch the full keynote here.