No Longer Just About Graphics, NVIDIA Highlights GPU AI, Business Applications

Unless you are a gamer, graphics developer or research scientist, NVIDIA probably isn’t on your radar. Even then, the most likely association is with big, expensive graphics cards for movie animators’ workstations or the kind of gaming PCs that all but the hardcore system builders have ditched for a PS4 or Xbox One. Graphics processing chips (GPUs) powered NVIDIA to nearly $5 billion in sales and a $12 billion market cap, but the company sees new applications like image and speech recognition and database acceleration powering the next phase of its growth. At NVIDIA’s premier developer event, the GPU Technology Conference (GTC), CEO Jen-Hsun Huang and his team decidedly emphasized broader and more innovative applications for the company’s parallel processing technology, notably as the brains processing sensor information to steer autonomous cars. Indeed, Elon Musk’s appearance on the keynote stage served to underscore both the inevitability of self-driving vehicles (Musk: “In the future, we may outlaw driving cars. Can’t have a person driving a two-ton death machine.”) and NVIDIA’s positioning as a key automotive technology provider.

Source: Author

Huang’s keynote had plenty of details about new hardware and hints of NVIDIA’s technology roadmap, but the broad theme was the application of deep learning algorithms to a host of real-world problems, and how GPU technology can dramatically accelerate performance and bootstrap product development and deployment. Deep learning is the current descriptor for a class of brain-inspired neural network algorithms that recursively process raw data, whether image pixels, audio samples, or unstructured text, to ascertain patterns and meaning. Deep learning is what enables Google to automatically categorize photos or Facebook to tag faces in snapshots, but broader business applications are only beginning to be explored.

Accelerating more than graphics

The ‘G’ (graphics) label for NVIDIA’s main product is becoming an anachronism. Instead, NVIDIA’s hardware, software and engineering output is increasingly manifested in algorithms and APIs, not just circuits and interconnects. Its products, like the Tesla server GPU cards (which don’t even have a video output), are increasingly used to compute neural networks and process databases, not render virtual images. Indeed, GPUs are a disruptive technology for databases, business analytics and robotics that will allow unknown startups like those in the GTC Emerging Companies Summit and giant corporations like IBM and Baidu to reshape markets.

Although GTC is designed for engineers and developers, with content that’s often inscrutable to those not immersed in its particular specialties, other interesting themes emerged with implications far beyond the R&D lab. Paramount among them, and spotlighted by Huang, is deep learning: increasingly detailed and complex neural network algorithms that adapt and automatically improve as new information is added, and that are particularly good at image and speech recognition. The link between deep learning research and NVIDIA is simple: its parallel processing technology, developed for graphics processors but now generalized via the CUDA platform, is tuned for algorithms that can be highly partitioned and parallelized and that benefit from a GPU’s very fast memory subsystem. Problems like image and speech recognition can be broken into easily computable chunks and the results reassembled into an answer. In essence, the algorithms for rendering graphical features and those for detecting them share common traits that GPUs are designed for.
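To see why this partition-and-reassemble structure matters, here is a minimal Python sketch of the pattern. Everything here is illustrative rather than real GPU code: a thread pool stands in for a GPU’s thousands of cores, and the `brighten` step is an invented toy operation on pixel values. The key property is that each chunk is processed independently, with no chunk depending on another’s result.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(chunk):
    """Toy per-pixel operation: each partition is computed independently."""
    return [min(255, px + 40) for px in chunk]

def parallel_map(pixels, n_chunks=4):
    """Partition the workload, process chunks concurrently, reassemble.

    On a GPU, thousands of such partitions would run truly in parallel;
    a thread pool merely illustrates the algorithmic structure.
    """
    size = -(-len(pixels) // n_chunks)  # ceiling division
    chunks = [pixels[i:i + size] for i in range(0, len(pixels), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        results = pool.map(brighten, chunks)
    # Reassemble the independently computed chunks into one answer.
    return [px for chunk in results for px in chunk]

if __name__ == "__main__":
    image = [0, 100, 200, 250] * 2  # toy 8-pixel "image"
    print(parallel_map(image))
```

Workloads with this shape, whether shading pixels or evaluating the layers of a neural network, are exactly what a GPU’s architecture rewards.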

NVIDIA pioneered the GPU, but now that the processor in every smartphone has more than enough horsepower to render Angry Birds or stream a movie, the company needs an outlet for its higher-end technology. Although there remains a niche market for ever more sophisticated graphics rendering (witness the special effects forthcoming in the next Star Wars movie), it’s not enough to fuel a growing company’s ambitions. That’s where deep learning and its applications, particularly to autonomous vehicles, come in.

Source: NVIDIA

That explains the prominence of several car companies and Musk’s keynote appearance. The man behind the world’s best electric vehicle technology was almost nonchalant when talking about the future of autonomous vehicles, stating that self-driving cars will soon become the norm. Indeed, Musk said society may one day outlaw driving one’s own car, quipping “We can’t have a person driving a two-ton death machine.” NVIDIA intends to be a key arms merchant for future car technology.

Yet real time vehicular image processing is merely one application for GPUs. NVIDIA also announced hardware and software systems designed to accelerate development for deep learning researchers and big data analysts with an eye to bootstrapping a new generation of applications based on its CUDA parallel computing platform and programming model.

Aside from car automation, one of the most interesting applications of GPU technology is speech recognition and voice-to-text transcription. Its appeal was palpable in a packed session, for which the line started forming almost an hour ahead of time, where a researcher from Baidu described using deep learning neural networks to dramatically improve the speed and accuracy of speech recognition.

One application with tangible business benefits is the acceleration of database queries and analysis. A prime example is MapD, a four-person startup based on the grad school research of founder Todd Mostak, which won last year’s GTC emerging startups award for its GPU-powered database-cum-data-visualization platform. After a briefing and demo with Mostak and attending his GTC talk, it’s easy to see why. MapD’s technology has tremendous advantages for a wide range of mid-sized data problems that don’t require Hadoop-scale infrastructure. Mostak’s demo, slicing and dicing a 60-million-record data set of airline flight information, turned data analysis into a real-time, interactive experience.
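The kind of “slice and dice” query being accelerated is ordinary filter-and-aggregate SQL. As a rough illustration (not MapD’s actual schema or engine), here is a toy flight-delay query run against an in-memory SQLite table; the table and column names are invented for the example. A GPU-backed engine executes the same logical query, but scans and aggregates millions of rows fast enough to keep the interaction feeling instantaneous.

```python
import sqlite3

# Invented stand-in for a flight-records table; columns are illustrative.
rows = [
    ("AA", "JFK", 12), ("AA", "LAX", -5), ("UA", "JFK", 30),
    ("UA", "ORD", 8), ("DL", "JFK", 0), ("DL", "ATL", 45),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (carrier TEXT, origin TEXT, delay_min INT)")
conn.executemany("INSERT INTO flights VALUES (?, ?, ?)", rows)

# A typical interactive slice: average delay per carrier out of one airport.
query = """SELECT carrier, AVG(delay_min)
           FROM flights
           WHERE origin = 'JFK'
           GROUP BY carrier
           ORDER BY carrier"""
for carrier, avg_delay in conn.execute(query):
    print(carrier, avg_delay)
```

Each new filter a user clicks in a dashboard just swaps the `WHERE` clause; the win from a GPU engine is re-running that scan over tens of millions of rows in milliseconds rather than seconds.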

Source: IBM, GTC

Other examples at GTC included an IBM talk and demonstration of a database system using NVIDIA Tesla GPUs paired with IBM’s POWER8 CPU that doubled query performance across a range of benchmarks, and stealthy startup Graphistry’s presentation on using GPUs to power its data visualization engine.

GTC demonstrated that the use of GPUs to tackle a diverse set of computational problems is expanding to the point that the acronym itself is an anachronism: parallel processing unit would be far more appropriate. With performance improvements accelerating as laid out in Huang’s keynote, expect to see GPUs powering more and more business applications.

Source: NVIDIA

Originally published at on March 20, 2015.
