GTC 2019 | New NVIDIA One-Stop AI Framework Accelerates Workflows by 50x

Synced
Published in SyncedReview · Mar 19, 2019

No wow moments, no bells and no whistles. Jensen Huang has delivered some groundbreaking keynote speeches in his years at the helm of NVIDIA, but today’s was not among them. When the US chip giant’s Co-Founder and CEO took the stage at San Jose State University to kick off the company’s annual GPU Technology Conference (GTC), he did not announce a new graphics card, nor did he unveil a rumoured (and long-awaited) new 7nm GPU architecture.

Huang did, however, have something up his sleeve for AI developers and data scientists: CUDA-X AI, an end-to-end platform that bundles NVIDIA’s GPU-acceleration libraries to streamline data science workflows and accelerate them by as much as 50 times.

Since its founding in 1993, NVIDIA has built up various libraries and tools that help data scientists train and deploy AI models more quickly on GPUs. For example, cuDNN is a GPU-accelerated library of primitives for deep neural networks, and TensorRT is a GPU-accelerated neural network inference library for deploying deep learning applications. There are also countless tools outside NVIDIA’s ecosystem that researchers use to speed up AI workflows, such as the machine learning library TensorFlow and Amazon Web Services’ model deployment tool SageMaker.

CUDA-X AI is designed to pack dozens of NVIDIA GPU-acceleration libraries, ranging from data processing to model implementation, into a one-stop shop. The idea is to reduce friction between different steps in the workflow and maintain consistency throughout the evolving AI development process.

Huang even coined a term for this innovation: Programmable Acceleration of multiple Domains with one Architecture, or PRADA.

“Wherever in the stack you want to code, that’s great; you want to use domain-specific libraries, or AI frameworks and software packages, it’s all good for us,” Ian Buck, NVIDIA VP and General Manager of Accelerated Computing, told Synced.

A key component of CUDA-X AI is RAPIDS, a GPU-acceleration platform for data science and machine learning that enables end-to-end data science and analytics pipelines to run entirely on GPUs. Incubated by NVIDIA for years, RAPIDS leverages low-level compute optimization, GPU parallelism and high-bandwidth memory.
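
To make the idea concrete, here is a minimal sketch of what such a GPU-resident pipeline looks like with the RAPIDS libraries cuDF (dataframes) and cuML (machine learning). It is an illustration rather than anything NVIDIA showed on stage: the file name and column names are hypothetical, and it assumes a CUDA-capable GPU with the RAPIDS packages installed.

```python
# A minimal sketch of an end-to-end GPU pipeline with RAPIDS (cuDF + cuML).
# Assumes a CUDA-capable GPU and the RAPIDS packages installed;
# "sales.csv" and its column names are hypothetical placeholders.
import cudf
from cuml.ensemble import RandomForestRegressor

# Load and clean the data directly in GPU memory with cuDF
df = cudf.read_csv("sales.csv").dropna()
X = df[["store_id", "week", "price"]].astype("float32")
y = df["units_sold"].astype("float32")

# Train and predict on the GPU with cuML; the data never leaves the device
model = RandomForestRegressor(n_estimators=100)
model.fit(X, y)
predictions = model.predict(X)
```

Because both the dataframe operations and the model training stay in GPU memory, there is no round trip through the CPU between data preparation and training, which is where much of the claimed speedup comes from.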

Microsoft has shown a keen interest in RAPIDS: also announced today was the adoption of NVIDIA RAPIDS by Microsoft’s Azure cloud service. The advantage is obvious, as Microsoft claims an impressive 20x speedup in model training using four NVIDIA GPUs and RAPIDS compared with traditional CPU-based solutions. Another early adopter is Walmart, which uses RAPIDS to improve the accuracy of its forecasts.

CUDA-X AI supports major deep learning frameworks such as TensorFlow, PyTorch and MXNet, and will be integrated into the data science workstations and NVIDIA T4 servers announced at GTC today.

NVIDIA GTC 2019 runs through Thursday, March 21 in Silicon Valley.

Journalist: Tony Peng | Editor: Michael Sarazen

2018 Fortune Global 500 Public Company AI Adaptivity Report is out!
Purchase a Kindle-formatted report on Amazon.
Apply for the Insight Partner Program to get a complimentary full PDF report.

Follow us on Twitter @Synced_Global for daily AI news!

We know you don’t want to miss any stories. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.
