TensorFlow Dev Summit key takeaways for Android Developers

Hoi Lam
Published in Android Developers · 5 min read · Mar 20, 2019


TensorFlow Dev Summit

Two weeks ago, at the TensorFlow Dev Summit, a number of exciting new developments for Android (session recordings) were announced. They included GPU delegate acceleration (2-7x faster than CPU), expanded documentation, and new codelabs showcasing how to use TensorFlow Lite models on Android. This post highlights some of these announcements and also summarizes some interesting conversations I had with developers.

GPU Acceleration brings On-device ML to the masses

Machine learning headlines often focus on the latest breakthroughs on the newest, most powerful devices. For mobile developers, what’s actually important is being able to deliver a good user experience across a wide range of phones. That’s why the Android developers I talked to are really excited about how much the new experimental TensorFlow Lite GPU delegate can accelerate inference (running ML models) on devices with OpenGL ES 3.1, which was introduced as part of Android API Level 21 (Lollipop).
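If you want to confirm this requirement at runtime before enabling the delegate, one option is to query the device's supported OpenGL ES version. The Kotlin sketch below is illustrative rather than part of the announcement:

    import android.app.ActivityManager
    import android.content.Context

    // Illustrative sketch: check whether the device reports OpenGL ES 3.1+
    // before enabling the GPU delegate. 0x00030001 is the packed encoding
    // of OpenGL ES version 3.1.
    fun supportsGpuDelegate(context: Context): Boolean {
        val activityManager =
            context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
        return activityManager.deviceConfigurationInfo.reqGlEsVersion >= 0x00030001
    }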

To use it, developers need to update their app’s build.gradle file:
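The snippet below is a sketch based on the preview-era artifact coordinates; the exact version string may have changed since, so verify it against the GPU delegate documentation.

    dependencies {
        // Experimental TensorFlow Lite build that bundles the GPU delegate.
        // The artifact/version string reflects the developer preview and may
        // differ in current releases.
        implementation 'org.tensorflow:tensorflow-lite:0.0.0-gpu-experimental'
    }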

When initializing the TensorFlow Lite interpreter, use the following:
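The following is a minimal Kotlin sketch rather than the exact snippet shown at the summit; model, input and output are placeholders for your own model buffer and pre-allocated arrays.

    import java.nio.MappedByteBuffer
    import org.tensorflow.lite.Interpreter
    import org.tensorflow.lite.gpu.GpuDelegate

    // Minimal sketch: wire the experimental GPU delegate into the interpreter.
    fun runWithGpuDelegate(model: MappedByteBuffer, input: Any, output: Any) {
        val delegate = GpuDelegate()
        val options = Interpreter.Options().addDelegate(delegate)
        val interpreter = Interpreter(model, options)

        // Run inference with the delegate-backed interpreter.
        interpreter.run(input, output)

        // Release native resources when done.
        interpreter.close()
        delegate.close()
    }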

This functionality is experimental, and the TensorFlow team would love to hear from the developer community about how it could be improved. The team is also working on open sourcing the delegate soon. More details can be found in the GPU delegate launch blog post.

Expanded documentation and new Android samples

The TensorFlow team expanded their documentation as part of the TensorFlow 2.0 alpha release. My personal favourite is the new TensorFlow Lite examples section. Here you will find a range of Android (and iOS) examples for integrating different types of models, including object detection and speech recognition. Before these samples existed, tasks such as image transformation (drawing the analysis result on top of the camera image) could be complex to implement. Now it's easier: for example, the object detection sample comes with tracker code and an image utility for common transformations.
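To give a flavour of what such a utility does, here is a small, hypothetical Kotlin helper (not the sample's actual code) that maps a detected bounding box from the model's input coordinates back onto the camera preview:

    import android.graphics.Matrix
    import android.graphics.RectF

    // Hypothetical helper, not the sample's actual utility: map a detection box
    // from the model's square input space (e.g. 300x300) onto the preview view.
    fun mapBoxToView(box: RectF, modelSize: Int, viewWidth: Int, viewHeight: Int): RectF {
        val matrix = Matrix().apply {
            setRectToRect(
                RectF(0f, 0f, modelSize.toFloat(), modelSize.toFloat()),
                RectF(0f, 0f, viewWidth.toFloat(), viewHeight.toFloat()),
                Matrix.ScaleToFit.FILL
            )
        }
        return RectF(box).also { matrix.mapRect(it) }
    }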

TensorFlow Lite samples

TensorFlow Lite, ML Kit and NNAPI

Google provides a wide range of on-device machine learning developer products. For most developers, the two typical starting points are:

  • ML Kit — ready-to-use base APIs (e.g. face detection) plus serving of custom TensorFlow Lite models
  • TensorFlow Lite — for converting and running your own ML models on Android, iOS and IoT devices

If you’d like to delve deeper into how to use TensorFlow (and other ML tools), here’s a more detailed overview of your development options (in order of complexity):

  • Use a Google pre-trained model (e.g. face detection) — ML Kit Base APIs
  • Use other pre-trained models — developers can use pre-trained TensorFlow Lite models, such as pose estimation (body and limb detection) and image segmentation (person vs background)
  • Make your own custom model:
    1. Train the custom model — TensorFlow
    2. Convert and run the finished model on Android / iOS / IoT devices — TensorFlow Lite
    3. Serve the converted TensorFlow Lite model to end users — ML Kit custom model serving

All these products work together to deliver an end-to-end experience for developers. For example, ML Kit Base APIs use TensorFlow Lite models behind the scenes to deliver a Google-trained model to your app. When these TensorFlow Lite models run on mobile devices, their performance can be further accelerated by the Android Neural Networks API (NNAPI) on supported devices.
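For instance, a base API such as face detection can be called in just a few lines. This is a rough Kotlin sketch against the ML Kit for Firebase classes available at the time of writing (class names may have changed in later releases):

    import android.graphics.Bitmap
    import com.google.firebase.ml.vision.FirebaseVision
    import com.google.firebase.ml.vision.common.FirebaseVisionImage

    // Rough sketch of an ML Kit Base API call (face detection); `bitmap` is a
    // placeholder for a camera frame you already have in memory.
    fun detectFaces(bitmap: Bitmap) {
        val image = FirebaseVisionImage.fromBitmap(bitmap)
        FirebaseVision.getInstance().visionFaceDetector
            .detectInImage(image)
            .addOnSuccessListener { faces ->
                // Each FirebaseVisionFace exposes a bounding box and landmarks.
            }
            .addOnFailureListener { e ->
                // Handle the failure, e.g. log the exception.
            }
    }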

It is rare for developers to need to interact with NNAPI directly; the main exception is if you are building your own machine learning framework.

On-device ML: new use cases

The expansion of on-device TensorFlow capabilities provides new opportunities for mobile developers in two ways:

1. Running more powerful models on-device

At the TensorFlow Dev Summit, we showcased an object detection sample where the model not only detected the object but also returned its location in the image.

A number of attendees were surprised by how powerful mobile devices (Android and iOS) have become, powerful enough to run such functionality entirely on-device. Running models on-device has many advantages, such as offline access and privacy. It also reduces latency, making real-time experiences possible.

2. Mobile as a general-purpose ML device

For years, there have been hopes that Internet of Things (IoT) devices, such as industrial machines or smart home gadgets, could bring ML to the edge of the network. This was given a boost by the introduction of Edge TPUs at the Dev Summit. What is even more interesting to me is that these IoT devices are likely to be single-purpose, e.g. sorting avocados by size or maintaining a comfortable room temperature. These are the kinds of valuable use cases the industry has been thinking about for a long time.

What is new is that with mobile platforms such as Android, we suddenly have a general-purpose ML device. ML can help the user log physical activity one minute and scan a driving licence the next. Opportunities like this seem to be under-explored. With mobile + ML, there will be new design patterns and new Android apps.

The best way to learn is to try

Many of the attendees I met are more used to thinking about ML in terms of supercomputers than mobile devices. They are excited to see a new platform that provides on-device machine learning and promises new discoveries yet to come.

I would encourage Android developers to check out the Android samples mentioned above to get a feel for what is possible.

On-device machine learning is more accessible and easier to get started with than ever. The next time you are faced with a problem, on-device ML might just be the solution.
