Intel AI Summit 2019 | Chip Giant Accelerates AI at the Edge

Synced | Published in SyncedReview | Nov 14, 2019

At the Intel AI Summit in San Francisco on Tuesday, the company put its chips on the edge, revealing its next-generation Movidius Myriad Vision Processing Unit (VPU) for edge media, computer vision and inference applications. Intel also introduced its new DevCloud for the Edge with the OpenVINO toolkit for edge devices, and demonstrated its Nervana Neural Network Processors for training (NNP-T1000) and inference (NNP-I1000).

The Nervana Neural Network Processors (NNPs) are Intel's first purpose-built ASICs for complex deep learning, designed to deliver scale and efficiency for cloud and data center customers.

“We’re one of the largest (AI companies) due to our breadth and depth that allows us to go from the data center out to the edge. And we anticipate this growing year on year as this technology transition unfolds. But the most important thing we’ve learned is that there really is no single approach for AI,” said Naveen Rao, Intel corporate vice president and general manager of the Intel Artificial Intelligence Products Group. Rao also stressed the necessity of purpose-built hardware like Nervana NNPs and Movidius VPUs to handle AI’s increasing workloads: “With this next phase of AI, we’re reaching a breaking point in terms of computational hardware and memory.”

The new Intel products strengthen the company’s growing portfolio of AI solutions, which is expected to generate more than US$3.5 billion in revenue in 2019. The company aims to provide AI solutions for a range of industries and at any scale.

Intel’s next-generation Movidius VPU, “Keem Bay,” is a low-power, high-performance edge inferencing product that’s scheduled to become available in the first half of 2020. It incorporates efficient architectural advances to deliver more than ten times the inference performance of the previous generation, with up to six times the power efficiency of competing processors.

Keem Bay performance compared with NVIDIA’s TX2 and Xavier and Huawei’s Ascend 310.

The DevCloud for the Edge, used by over 2,700 enterprises, and the OpenVINO toolkit both address a key pain point for developers: they let them try, prototype and test AI solutions on a broad range of Intel processors before buying hardware.
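To make that workflow concrete, below is a minimal sketch of what prototyping inference looked like with the 2019-era OpenVINO Python API (the IECore/IENetwork interface). The model file names and the random input frame are hypothetical placeholders, not anything Intel showed at the summit:

```python
# Minimal OpenVINO inference sketch (2019-era Python API).
# "face-detection.xml/.bin" are hypothetical IR files produced by the
# Model Optimizer; a real application would load its own converted model.
import numpy as np
from openvino.inference_engine import IECore, IENetwork

ie = IECore()                                 # enumerates available Intel devices
net = IENetwork(model="face-detection.xml",   # network topology (IR)
                weights="face-detection.bin") # trained weights (IR)

input_blob = next(iter(net.inputs))           # name of the network input
out_blob = next(iter(net.outputs))            # name of the network output
n, c, h, w = net.inputs[input_blob].shape     # expected NCHW input shape

# Compile the network for a target device; passing "MYRIAD" instead of
# "CPU" would target a Movidius VPU.
exec_net = ie.load_network(network=net, device_name="CPU")

frame = np.random.rand(n, c, h, w).astype(np.float32)  # stand-in for a real image
result = exec_net.infer(inputs={input_blob: frame})
print(result[out_blob].shape)                 # shape of the detection output
```

The same script can be pointed at different device_name targets (CPU, GPU, MYRIAD) without touching the model code, which is what lets developers benchmark across Intel processors in the DevCloud before committing to hardware.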

“Customers are rapidly adopting AI at the edge because the economic and social advantages are just too large to ignore. It’s happening in every industry from smart cities, to industrial, healthcare, and retail,” said Intel corporate vice president of IoT Jonathan Ballon. Intel hopes its hardware and software innovations will not only accelerate AI performance but also make AI easier to adopt.

Now in production and being delivered to customers, Intel Nervana NNPs are part of a systems-level AI approach offering a full software stack developed with open components and deep learning framework integration.

Nervana Neural Network Processor (NNP) for training (NNP-T1000)

The Nervana NNP-T strikes a balance between compute, communication and memory, enabling near-linear, energy-efficient scaling from small clusters up to the largest pod supercomputers. The Intel Nervana NNP-I, meanwhile, is power- and budget-efficient and ideal for running intense, multimodal inference at real-world scale in flexible form factors. Both products were developed for the AI processing needs of leading-edge AI companies such as Baidu and Facebook.

Intel AI Inference Products Group General Manager Gadi Singer identified three highlights of the NNP family as power efficiency, versatility, and scale.

“In any environment you are limited by power, and power is also a major factor in the TCO (total cost of ownership) of computing. Power efficiency helps you physically pack things more densely,” Singer told Synced in a press briefing.

“The second thing is versatility. Some of the solutions that we see are solving a particular problem, like image recognition, which is a very popular use. What we had as a driving force from the beginning is that it must support multiple usages.”

Singer says that when it comes to scaling capability, “hardware-software optimization together is a must.” The NNP architecture is built from modular blocks, which makes it flexible and easy to interconnect. Intel is also working hard to customize its software to match its hardware from the top down. “I actually have more of my team working on software than working on hardware.”

Facebook’s AI System Co-Design Director Misha Smelyanskiy told the summit audience: “We are excited to be working with Intel to deploy faster and more efficient inference compute with the Intel Nervana NNP-I and to extend support for our state-of-the-art deep learning compiler Glow to the NNP-I.”

Ballon also announced the first edge AI nanodegree, which Intel will offer in association with online education platform Udacity to provide industry practitioners, even those without a background in computer science, with the skills required to develop their own AI models where data is generated, at the edge.

To create more opportunities for women in technology and AI, the company is making 750 scholarships available, most of them through the international non-profit organization Women Who Code, which provides services for women pursuing tech careers and runs a job board for companies seeking female coding professionals.

Journalist: Yuan Yuan | Editor: Michael Sarazen

