TensorFlow Image Classifiers on Android, Android Things, and iOS

Dan Jarvis
Capital One Tech
Nov 10, 2017

The TensorFlow repository contains a selection of examples, including sample mobile applications, for Android and iOS. This article compares the TensorFlow image classifier on Android, Android Things, and iOS.

1 — Android

As you’d probably expect from an open source project developed by Google, TensorFlow currently has more sample apps available for Android than iOS. The README explains them all, but today we’re just looking at the image classifier, TF Classify.

Here’s a demo video of it in action:

https://www.youtube.com/watch?v=4oU4N6bAjR4

It can only classify items that it has been trained on (as explained here), and it generally does a good job.

Pro tip: you can press the hardware volume down button to show diagnostics information on screen, including:

  • The inference time (bottom left of the screen).
  • A preview of the square cropped image used for inference (bottom right; see the sketch below).
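For context, the demo feeds the model a square center crop of the camera frame, scaled down to the network’s input size. A minimal sketch of that preprocessing in Java (the method name and the 224x224 input size are illustrative assumptions, not copied from the sample):

import android.graphics.Bitmap;

// Center-crop a camera frame to a square, then scale it to the
// model's expected input size (e.g. 224x224).
static Bitmap cropAndScale(Bitmap frame, int inputSize) {
    int size = Math.min(frame.getWidth(), frame.getHeight());
    int x = (frame.getWidth() - size) / 2;
    int y = (frame.getHeight() - size) / 2;
    Bitmap square = Bitmap.createBitmap(frame, x, y, size, size);
    return Bitmap.createScaledBitmap(square, inputSize, inputSize, true);
}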

If you want to know more about how this app works behind the scenes, see Using a Pre-Trained TensorFlow Model on Android.
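Under the hood, the demo drives the model through TensorFlowInferenceInterface from the org.tensorflow.contrib.android library (the Android API at the time of writing). Here’s a rough sketch of a single inference pass; the node names, input size, and class count below are illustrative assumptions, not values copied from the sample:

import android.content.res.AssetManager;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

static final String MODEL_FILE =
        "file:///android_asset/tensorflow_inception_graph.pb";
static final String INPUT_NODE = "input";    // assumed node name
static final String OUTPUT_NODE = "output";  // assumed node name
static final int INPUT_SIZE = 224;           // assumed input size
static final int NUM_CLASSES = 1008;         // assumed class count

static float[] classify(AssetManager assets, float[] pixels) {
    TensorFlowInferenceInterface tf =
            new TensorFlowInferenceInterface(assets, MODEL_FILE);
    // Feed one RGB image as a batch x height x width x channels tensor.
    tf.feed(INPUT_NODE, pixels, 1, INPUT_SIZE, INPUT_SIZE, 3);
    tf.run(new String[] {OUTPUT_NODE});
    // Read back one confidence value per class.
    float[] confidences = new float[NUM_CLASSES];
    tf.fetch(OUTPUT_NODE, confidences);
    return confidences;
}

In the real app the interface is created once and reused; constructing it for every frame would dominate the inference time.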

Performance

~200–300ms per inference on a 2015 Nexus 5.

~100–400ms per inference on a Samsung S8+.
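Numbers like these are easy to reproduce: wrap the inference call in a wall-clock timer. A tiny sketch, reusing the hypothetical classify helper from above:

import android.os.SystemClock;
import android.util.Log;

long start = SystemClock.uptimeMillis();
float[] confidences = classify(assets, pixels);
long elapsedMs = SystemClock.uptimeMillis() - start;
Log.i("TFClassify", "Inference took " + elapsedMs + "ms");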

Size

The app is 99MB (includes all four sample apps):

  • Inception model = 53MB.
  • Libraries = 11MB to 17MB per architecture.

Prerequisites

  • Android Studio.

For testing on a real device:

  • Enable Developer Options and USB debugging on the device by following these instructions.

Building

  • Open Android Studio.
  • Press play.

If you don’t want to build the sample app yourself, the README also links to prebuilt APKs you can install directly.

2 — Android Things

In addition to the Android and iOS sample apps in the TensorFlow examples folder, there is also a sample Android Things image classifier which comes with an excellent Android Things Image Classifier codelab to walk you through all the steps.

If you have a hardware screen, you’ll be able to see the photos and classifications there. If you don’t have a hardware screen, you can view the logs in Logcat. You’ll see lots of noisy logs, but the ImageClassifierActivity tag is the one to look for:

...
01-01 00:01:12.596 714-756/com.example.androidthings.imageclassifier D/ImageClassifierActivity: Got the following results from Tensorflow: [[578] remote control (91.3%)]
...
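To cut out the noise, you can filter Logcat down to just that tag from the command line:

adb logcat -s ImageClassifierActivity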

I tried taking photos of a few different objects, with varying success:

  • A remote control, laptop screen and water bottle all worked very well.
  • The developer board (i.e. a hardware selfie) was classified as “holster” or “switch”.
  • Taking photos of people didn’t work. Apparently this is because the early versions of the Inception image classifier model were not trained on pictures of people, so I guess this is “working as designed” for the moment. :-)
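For reference, the bracketed log output above is just the best-scoring class rendered as index, label, and confidence. A hedged sketch of producing that line from a raw confidence array (assuming labels holds the model’s label list in class-index order):

// Format the best-scoring class like the sample's log output,
// e.g. "[[578] remote control (91.3%)]".
static String topResult(float[] confidences, java.util.List<String> labels) {
    int best = 0;
    for (int i = 1; i < confidences.length; i++) {
        if (confidences[i] > confidences[best]) {
            best = i;
        }
    }
    return String.format("[[%d] %s (%.1f%%)]",
            best, labels.get(best), confidences[best] * 100);
}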

Performance

~2000–5000ms per inference on a Pico board.

Size

The app is 69MB:

  • Inception model = 53MB.
  • Libraries = 11MB per architecture (armv7 and arm64).

Prerequisites

Detailed instructions for the hardware you need, and the steps to set it up, are all in the Android Things Image Classifier codelab. I’ve just listed the highlights here.

  • Hardware (developer board, camera, Rainbow HAT, USB C cable). A screen is optional.
  • The OS image for the hardware.
  • Android Studio 3.0+.

In my case I was using a Pico Pro developer board.

Building

  • Connect everything up.
  • Flash the OS image.
  • Press play in Android Studio.
  • Reboot the board (needed to grant the camera permission; see the note below).
  • Press play in Android Studio again.
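Why the reboot? Android Things (at the time of writing) grants manifest permissions at boot rather than through a runtime prompt, so a freshly installed app won’t hold the camera permission until the board restarts. A minimal sketch of verifying the grant from inside an Activity:

import android.Manifest;
import android.content.pm.PackageManager;

// CAMERA is declared in the manifest, but on Android Things the grant
// only takes effect after the next boot.
boolean cameraGranted = checkSelfPermission(Manifest.permission.CAMERA)
        == PackageManager.PERMISSION_GRANTED;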

3 — iOS

There are three sample iOS apps in the TensorFlow repository.

If you don’t have access to a real iOS device, then you’ll only be able to build and run the simple and benchmark projects.

The simple project loads a single image of Grace Hopper, and classifies it, resulting in 51% confidence that it sees “military uniform”, and 10% confidence it sees a “mortarboard”.

The benchmark project is the same, except it also prints out profiling information.

The camera project is basically the same as the Android TF Classify app. It provides super fast, real-time image classification, with an additional option to freeze the frame.

Performance

~50ms per inference on an iPhone 7.

Size

The app is 98MB:

  • Inception model = 53MB
  • Libraries = 11MB per architecture (armv7 and arm64)

Prerequisites

The camera sample needs to run on a physical iOS device. If you aren’t familiar with iOS development practices, you might find some of these steps tricky.

  • Xcode 7.3+.
  • Install CocoaPods (pod).

For testing real devices:

  • Apple Developer Account — $99 / year.
  • Set up some signing certificates and provisioning profiles.
  • Provision your test devices with Apple.

Note — If you’re wondering why you can’t just download a demo app, it’s because the Apple App Store currently does not allow demo/sample apps.

Building

The README provides all the detailed steps, but in summary:

  • Download the model and run pod install (this pulls down ~800MB).
  • Open the .xcworkspace file (opening the .xcodeproj file gives linker errors).
  • Press play.

For testing real devices:

  • Select your signing identity in Info.plist

Conclusion

This has been a quick walkthrough of some of the TensorFlow image classifiers available on Android, Android Things, and iOS. For a full listing and explanation of their respective offerings, check out the sample folders in the TensorFlow repository.

If you liked this article you might enjoy the video of my talk on Applied TensorFlow in Android Apps.

DISCLOSURE STATEMENT: These opinions are those of the author. Unless noted otherwise in this post, Capital One is not affiliated with, nor is it endorsed by, any of the companies mentioned. All trademarks and other intellectual property used or displayed are the ownership of their respective owners. This article is © 2017 Capital One.
