On Device AI: The next ‘SMART’ thing

Parul Pandey · Published in The Startup · 8 min read · Oct 26, 2018

The benefits of on-device AI, its challenges, and its future.

Photo by Christopher Gower on Unsplash

By 2022, 80 per cent of smartphones shipped will have on-device AI capabilities, up from 10 per cent in 2017 — Gartner

Artificial Intelligence is no longer a sci-fi term. It has become an indispensable part of our existence in the form of smartphones, smartwatches, and tablets, to name a few. Our lives today revolve around these devices to an unimaginable extent. The use of virtual personal assistants like Siri and Cortana is on the rise, and we would be lost without Google Maps to guide us. In short, AI is advancing rapidly and changing the way we lead our lives. Today's smart devices are far 'smarter' than their predecessors. Rapid improvements in software and hardware have triggered an era where intelligence is moving from the cloud onto the device, revolutionizing our lives.

AI, Machine Learning & Deep Learning

AI is a superset that encompasses both Machine Learning and Deep Learning. François Chollet, in his book 'Deep Learning with Python', describes Artificial Intelligence as the effort to automate intellectual tasks normally performed by humans. However, for any machine to exhibit Artificial Intelligence, it needs to learn from past experience, which comes in the form of datasets called training data. What it learns is then used to predict results for unseen test data.


So, inherently, AI is the broader term, but for any machine to exhibit Artificial Intelligence, it needs to be trained using machine learning algorithms.

“Machine learning is the science of getting computers to act without being explicitly programmed.” — Stanford.

Deep learning is a subfield of machine learning: a new take on learning representations from data that emphasizes learning successive layers of increasingly meaningful representations. Deep learning has achieved great success in fields like image classification, speech recognition, handwriting transcription, and machine translation, to name a few, and research in this field continues toward even more significant results.

Cloud-Based AI


Since deep learning algorithms work on large amounts of data, most of these operations take place in the cloud. For instance, how does a voice recognition task work on your phone? Your voice is recorded on the phone and then sent up to a machine learning server in the cloud. The entire processing takes place at the server, and the generated result is pushed back to the smartphone. Cloud-based AI has both advantages and disadvantages.
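The round trip just described can be sketched with stub functions standing in for the real recorder, network, and server (all names here are hypothetical, for illustration only):

```python
# Illustrative sketch of the cloud-based voice recognition round trip.
# Every function here is a hypothetical stub; a real system would stream
# audio over the network to an actual speech-recognition service.

def record_audio_on_phone() -> bytes:
    # Stand-in for microphone capture on the device.
    return b"raw-audio-samples"

def transcribe_on_server(audio: bytes) -> str:
    # Stand-in for the machine learning server's speech-to-text model.
    return "turn on the lights"

def send_to_cloud_server(audio: bytes) -> str:
    # Stand-in for the network upload plus server-side inference.
    # This hop is where the latency and privacy costs of cloud AI arise:
    # the raw audio leaves the device and must make a network round trip.
    return transcribe_on_server(audio)

def voice_command_pipeline() -> str:
    audio = record_audio_on_phone()       # 1. record on the phone
    result = send_to_cloud_server(audio)  # 2. upload and process in the cloud
    return result                         # 3. result pushed back to the phone

print(voice_command_pipeline())
```

On-device AI collapses step 2 into a local model call, which is exactly the trade-off the next sections weigh.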

Advantages

  • Access to the large amounts of data stored on servers, and it is data that drives AI.
  • AI in the cloud takes data analysis to the next level, because a plethora of historical and current data is available within a cloud environment. AI learns patterns from this data and makes recommendations that tend to be highly accurate.
  • Cost saving is an essential aspect of using cloud-based services. Organizations need to spend money only on the storage they need, when they need it.

Disadvantages

However, cloud-based AI also comes with certain drawbacks.

  • Latency, i.e., the time lag between the input sent and the output received. This matters especially for autonomous vehicles, where results need to be obtained instantly.
  • Privacy concerns that arise from sending data to the cloud.
  • Reliability issues when network connectivity is poor or absent.

The need for On-Device AI

It is a common belief that AI is all about big data and the cloud. On the contrary, AI can also be localized, right in the palm of our hands, in the form of smartphones.

There has been a constant movement of AI towards edge devices, made possible by an increase in computing power coupled with improvements in AI algorithms and the production of robust hardware and software. These advancements have made it possible to run machine learning solutions on smartphones and automobiles rather than in the cloud, and this trend is on the rise.

https://www.qualcomm.com/news/onq/2017/08/16/we-are-making-device-ai-ubiquitous?cmpid=oofyus181544

Benefits of using AI locally

On-device AI works without having to go back to a server and consumes less of your battery life.

AI is becoming ubiquitous today. AI-based services are slowly moving towards personalized experiences in household devices, vehicles, IoT, and more. Having on-device AI has some tremendous benefits regarding:

  • Performance

Undoubtedly, on-device processing is much faster than the cloud, since it saves the round trip from phone to server and back. This is an important factor because some AI use cases cannot afford latency. It is especially critical for autonomous vehicles, where the vehicle needs to apply the brakes and cannot afford a lag of even a second.

  • Privacy

Today a smartphone stores a lot of our sensitive data in the form of fingerprints, iris scans, voice signatures, etc. Keeping all this data on the device ensures a level of security that can be compromised on a server.

  • Reliability

Network connectivity is an issue in many parts of the world, which makes it difficult to fetch data from servers in locations with poor or no signal. On-device AI provides a solution to this reliability problem.

  • Network bandwidth

On-device AI also conserves network bandwidth. Regularly sending data back and forth to the cloud sharply increases bandwidth consumption.

  • Power Saving

Power saving is also an important concern for phones today. Running AI locally saves power both on the phone and in the server room, since the phone no longer uses its mobile radios to send and receive data, and no server hardware is tied up doing the processing.
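The bandwidth and power points above come down to simple arithmetic: shipping raw audio to the cloud costs orders of magnitude more bytes than running inference locally and keeping (or sending) only the result. A back-of-the-envelope estimate, with assumed but typical audio parameters (not figures from this article):

```python
# Illustrative bandwidth estimate: uploading raw audio for cloud inference
# vs. producing the transcript on device. The audio format is an assumption
# (16 kHz, 16-bit mono is a common speech-recognition input).

SAMPLE_RATE_HZ = 16_000   # samples per second
BYTES_PER_SAMPLE = 2      # 16-bit audio
SECONDS = 5               # a five-second voice command

raw_audio_bytes = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SECONDS
transcript_bytes = len("turn on the lights".encode("utf-8"))

print(f"cloud AI uploads:   {raw_audio_bytes:,} bytes")  # 160,000 bytes
print(f"on-device result:   {transcript_bytes} bytes")   # 18 bytes
print(f"ratio: ~{raw_audio_bytes // transcript_bytes:,}x")
```

Every one of those uploaded bytes also costs radio power on the phone, which is why the bandwidth and power-saving arguments are two sides of the same coin.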

Hardware Improvements

Running AI on device is no mean feat. Power and thermal efficiency are two areas that need paramount attention: AI workloads can be memory- and compute-intensive, while the power, thermal, and size constraints of the devices have to be kept in mind. Breakthrough advancements and innovations are therefore needed in both hardware and software to achieve on-device AI. Various companies have been investing in this field for quite some time now and have come up with remarkable solutions. Some of the major players in this domain are:

  • Qualcomm

Qualcomm's vision is to make on-device intelligence pervasive. Their focus has been on high-performance hardware/software and optimized network design. Qualcomm Snapdragon Mobile Platforms have been the SoC of choice for the highest-performance mobile devices, and the Qualcomm Hexagon Digital Signal Processor is used extensively to address AI workloads. Along with hardware improvements, Qualcomm has also released the Snapdragon Neural Processing Engine (NPE) Software Development Kit (SDK), which features an accelerated runtime for on-device execution of convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which are great for tasks like image recognition and natural language processing.

  • Huawei

The Chinese giant has developed the Kirin 980, a seven-nanometre mobile chipset equipped with Artificial Intelligence (AI) capabilities. In addition to improving the performance of your device, the Kirin 980 also has the potential to improve both network connectivity and data download speeds.


The Kirin 980 supports common AI frameworks such as Caffe, TensorFlow, and TensorFlow Lite, and provides a suite of tools that reduce the difficulty of engineering on-device AI, allowing developers to easily tap into the processing power of its dual NPU.
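Frameworks like TensorFlow Lite expose such accelerators through "delegates": operations the NPU supports are offloaded to it, and everything else falls back to the CPU. The dispatch idea can be sketched in plain Python (all names here are hypothetical illustrations, not Huawei's or TensorFlow's actual API):

```python
# Conceptual sketch of accelerator dispatch with CPU fallback, in the
# spirit of framework "delegates". The op names and NPU coverage set are
# assumptions for illustration; real offloading goes through vendor SDKs.

NPU_SUPPORTED_OPS = {"conv2d", "matmul", "relu"}  # assumed NPU op coverage

def run_on_npu(op: str) -> str:
    # Stand-in for executing the op on the neural processing unit.
    return f"{op}@npu"

def run_on_cpu(op: str) -> str:
    # Stand-in for the general-purpose CPU fallback path.
    return f"{op}@cpu"

def dispatch(model_ops: list[str]) -> list[str]:
    # Offload each op the accelerator supports; fall back to CPU otherwise.
    return [run_on_npu(op) if op in NPU_SUPPORTED_OPS else run_on_cpu(op)
            for op in model_ops]

print(dispatch(["conv2d", "relu", "softmax"]))
# ['conv2d@npu', 'relu@npu', 'softmax@cpu']
```

The per-op split is why "supports framework X" matters: the more of a model's operations the NPU covers, the less work lands on the slower, hungrier CPU path.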

  • Google

Google has also joined the bandwagon to take its AI expertise from cloud to device. Google's new Edge TPU is progress in that direction: a tiny AI accelerator that carries out machine learning jobs in IoT devices.

In its latest offering, the Pixel 3, Google uses on-device AI to bring the best AI experiences to users. Google piloted this technology last year with Now Playing, bringing automatic music recognition to the Pixel 2; this year, the Phone app and the camera both use on-device AI to let users interact more seamlessly with the world.

“We want to empower people with new AI-driven abilities. With our advances in on-device AI, we can develop new, helpful experiences that run right on your phone and are fast, efficient, and private to you” — Google

ARM and MediaTek are also designing chips for this sort of on-device AI task and building their own AI accelerators, while NVIDIA's GPUs dominate the market for training algorithms.

Challenges

Even though the concept of on-device AI looks promising, it comes with its own set of challenges. The entire idea of cloud-based services is to let tasks that require a lot of computation and data be handled by hardware apt for the job; the cloud offers not only expertise in this regard but also the resources to achieve it. On-device AI is still in its development stage and will need time to mature, and breakthroughs will be required to achieve what has been planned. Beyond that, there are obvious constraints relating to power consumption, thermal issues, and size: an increase in the size of a neural network comes with a steep rise in its power demands, which in turn affects the size of the device.

Conclusion

There is no denying that cloud-based AI is still very necessary and important. It offers the best solutions for big data problems, and running machine learning algorithms in the cloud comes with a lot of advantages. However, on-device AI can play a powerful role in real-time and privacy-sensitive applications that cannot afford latency or information leakage.

Parul Pandey

Principal Data Scientist @H2O.ai | Author of Machine Learning for High-Risk Applications