What’s the Deal with “AI Chips” in the Latest Smartphones?

Neuromation · Nov 21, 2018

You may have heard a lot of hype recently about the new AI chips that are making their way into the latest smartphones. According to a common recent headline, they ‘Put the Power of Neural Networks in the Palm of your Hand!’. The major manufacturers have all given their AI chips cool names as well. The iPhone has a ‘Bionic Chip’ with a ‘Neural Engine’ in it. Huawei’s new phone has a ‘Neural Processing Unit (NPU)’.

But what do these chips actually do? What do they mean for consumers today? Buzz aside, we can see there is a major AI hardware trend happening in mobile right now. Samsung, Qualcomm, ARM, Nvidia, as well as the previously mentioned Apple and Huawei, all have brand new mobile AI chips. Google is also adding an API to Android to “tap into silicon-specific accelerators” and Apple has opened up its AI framework to developers.
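To make the developer-facing side of this concrete, here is a minimal sketch of handing a trained network to Apple’s Core ML format with the coremltools Python package, so iOS can schedule it onto whatever acceleration hardware is available. The toy network is purely hypothetical, and the exact conversion arguments vary by coremltools version.

```python
import coremltools as ct
import tensorflow as tf

# A tiny stand-in network; a real app would convert its own trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# coremltools translates the network into Core ML's older "neuralnetwork"
# format here; at runtime the OS decides whether it runs on the CPU, the GPU,
# or the dedicated Neural Engine.
mlmodel = ct.convert(model, convert_to="neuralnetwork")
mlmodel.save("TinyClassifier.mlmodel")
```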

So, what will the presence of these chips mean for trends in artificial intelligence development and AI solutions in the future? What solutions will be possible in a world of AI hardware-accelerated smartphones that weren’t possible in the past?

The fact is that smartphones have been taking advantage of AI for some time, but the processing has either been done by the main CPU/GPU chipset or offloaded to the cloud. One clear advantage of the new generation of AI chips is that they allow processing to be done on the device itself, lowering response times compared with offloading compute to the cloud. They also allow processing to be done more efficiently, reducing power consumption and increasing battery life.
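As a rough sketch of that difference, the snippet below contrasts the two paths. The endpoint URL, the classifier.tflite file and the 224×224 input shape are placeholders for illustration, not any vendor’s real service or model.

```python
import numpy as np
import requests
import tensorflow as tf

# Stand-in for a camera frame; we assume the model expects 224x224 RGB input.
image = np.random.rand(1, 224, 224, 3).astype(np.float32)

# Cloud offload: the raw image leaves the phone, and every request pays a
# network round trip. (The URL is a hypothetical placeholder.)
requests.post("https://example.com/classify", json={"pixels": image.tolist()})

# On-device inference: a compact model runs locally, which is exactly where
# dedicated AI silicon helps with speed and power draw.
interpreter = tf.lite.Interpreter(model_path="classifier.tflite")  # placeholder file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
scores = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```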

Another benefit of on-device AI hardware is data privacy and security. Personal biometric data or other sensitive data can remain on the device and doesn’t need to be sent over the internet for processing in the cloud. This allows the data owner to retain complete control over their information. In essence, you can bring the model to the data and not the data to the model.

Future applications that benefit from on-device ‘edge inference’ may let networks of AI-chip-enabled smartphones work in concert, each using private, locally held data to train shared models. This would allow large amounts of personal data to be aggregated while maintaining security and privacy.
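A toy sketch of that idea, in the spirit of federated learning, is shown below: each simulated ‘phone’ refines a tiny linear model on data it never shares, and only the resulting weights are averaged into the shared model. The data, the model and the update rule are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, x, y, lr=0.1, steps=20):
    """One phone refines the shared weights on its own private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w

# Three simulated devices, each with private data generated from the same
# underlying relationship (true weights [1.5, -0.7]).
devices = []
for _ in range(3):
    x = rng.normal(size=(50, 2))
    y = x @ np.array([1.5, -0.7]) + rng.normal(scale=0.1, size=50)
    devices.append((x, y))

# The shared model lives on the "server"; only weight updates travel.
shared_w = np.zeros(2)
for _ in range(5):
    updates = [local_update(shared_w, x, y) for x, y in devices]
    shared_w = np.mean(updates, axis=0)

print(shared_w)  # approaches [1.5, -0.7] although no device revealed its data
```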

While such applications are still on the drawing board, today’s use cases can be slightly more prosaic. At a recent event, Apple demonstrated the potential of its Bionic Chip’s Neural Engine by mapping a user’s face onto an animated poo emoji (to great applause). But these are still early days.

As the capabilities of these chips have only recently been made available to developers, the most compelling third-party AI-enhanced apps are likely still only now being developed. A lack of adequate toolsets to enable the majority of software developers to undertake real AI projects is another major bottleneck, as is the lack of relevant, low-cost and adequately labeled training data. As a result, most AI functionality in existing smartphones today is restricted to the phone manufacturers’ proprietary apps and functions, since only they have the budgets, expertise, and access to data.

One example of this type of AI-powered, OEM-created application is Apple’s Face ID functionality. Facial recognition has long been a major field of study and development in deep learning computer vision systems. In an extremely informative blog post from November 2017 called An On-device Deep Neural Network for Face Detection, Apple discussed the challenges and opportunities of deep learning approaches for Face ID.

“With the advent of deep learning and its application to computer vision problems, the state-of-the-art in face detection accuracy took an enormous leap forward… [but] compared to traditional computer vision, the learned models in deep learning require orders of magnitude more memory, much more disk storage, and more computational resources. As capable as today’s mobile phones are, the typical high-end mobile phone was not a viable platform for deep-learning vision models. Most of the industry got around this problem by providing deep-learning solutions through a cloud-based API. In a cloud-based solution, images are sent to a server for analysis using deep learning inference to detect faces…

Apple’s Computer Vision Machine Learning Team goes on to describe the specific techniques they used to overcome the problems with running a deep convolutional network on the device.

“Combined, all these strategies ensure that our users can enjoy local, low-latency, private deep learning inference without being aware that their phone is running neural networks at several hundreds of gigaflops per second.”

Besides biometric security systems that require high security and accuracy like Face ID, Apple and the other major smartphone manufacturers are also using deep learning to identify elements in photographs, including objects, people, faces and scenes, in order to optimize the camera to take the best pictures in a wide range of situations and conditions.
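The sketch below shows the general flavor of this kind of recognition using an off-the-shelf MobileNetV2 classifier; the photo.jpg input is a placeholder, and the step of mapping predicted labels to camera settings is left out, so this is not any phone maker’s actual pipeline.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)

# MobileNetV2 is a small ImageNet classifier designed with mobile use in mind.
model = MobileNetV2(weights="imagenet")

# "photo.jpg" stands in for a frame coming off the camera.
img = tf.keras.preprocessing.image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), 0))

# Top predictions such as 'seashore' or 'golden_retriever' could then be used
# to pick an appropriate shooting mode.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {score:.2f}")
```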

Huawei has been an early leader in this field. Their Kirin AI chip beats processors from Qualcomm and Apple on a number of benchmarks, and their R&D budget last year, at $11.75bn USD, was actually slightly higher than Apple’s.

AI functionality on their latest flagship phone can now recognize over 500 separate scenarios. They are also leaders in developing AI image stabilization (AIS), which they accomplish by using their Neural Processing Unit chip to predict and react to shaky hand movements for each individual frame. This technology also allows for longer exposures, enabling remarkable, previously impossible hand-held night shots.

Other AI-enabled functions in the latest smartphones could improve the user experience in ways that are almost undetectable — we may not be aware of what is happening, but our phones will simply work much more smoothly, thanks to hardware-accelerated artificial intelligence working behind the scenes.

Some ways this might happen include improved natural language processing (NLP) deep learning models that predict word choice and intention for text-entry keyboards, as well as better personalized news aggregators, map navigation and real-time translation.
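As a deliberately tiny illustration of on-device word prediction, the sketch below uses simple bigram counts over an invented typing history rather than a real deep NLP keyboard model; the point is that the statistics live on the phone and keep adapting to the owner’s own habits.

```python
from collections import Counter, defaultdict

# Invented typing history; on a real phone this would accumulate locally.
history = "see you at the office . see you at the gym . meet me at the office".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    bigrams[prev][nxt] += 1  # counts never leave the device

def suggest(prev_word, k=3):
    """Return up to k likely next words after prev_word."""
    return [word for word, _ in bigrams[prev_word].most_common(k)]

print(suggest("the"))  # ['office', 'gym'] reflects this user's own phrasing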

An early poster child for AI functionality on smartphones was Apple’s launch of Siri, which has since been duplicated by multiple smartphone manufacturers in the form of more generic ‘virtual assistants’. AI processing for these voice assistants has been, and may continue to be, done in the cloud, but onboard AI acceleration hardware should allow for improved personalization: recognizing local place names, learning the names of friends and family, and even learning individual accents and speech patterns — all with higher accuracy, lower latency and lower power consumption. If Siri and her cousins have sometimes seemed like frustrating science experiments in the past, that could be in the process of changing with the help of AI chips.

Hardware acceleration is of course only one piece of the puzzle — optimizing models to run efficiently and creating algorithms capable of running locally will take a lot of hard work and coordinated effort. Likewise, access to properly labeled training data will be an obstacle for many app developers looking to take advantage of this new hardware (a problem that Neuromation is looking to help solve with its focus on synthetic data and industry-leading developer toolsets).
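For a flavor of what optimizing models for local execution can involve, here is a short sketch of post-training quantization with the TensorFlow Lite converter; the tiny Keras network is a stand-in for a real trained model.

```python
import tensorflow as tf

# A tiny stand-in network; in practice you would convert a trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_model = converter.convert()

# The resulting file is small enough to ship inside a mobile app and is laid
# out for the integer-friendly arithmetic that mobile AI chips accelerate.
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```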

Given how recently AI chips have appeared in smartphones, it is safe to say we are still in the first stage of this technology trend. This first stage may be less immediately visible, as it is focused on basic smartphone functionality like the camera, text entry, map navigation and search — and will initially show up as improved accuracy, security and power consumption. But with further development, this revolution will spread to brand-new app experiences that may previously have been impossible, providing a level of personalization, prediction and accuracy that we haven’t experienced before.

By Angus Roven,

Neuromation Investor Relations Analyst
