Selfie2Anime with TFLite — Part 3: Android App

Margaret Maynard-Reid
Google Developer Experts
5 min read · Jul 15, 2020

Written by ML GDE Margaret Maynard-Reid | Reviewed by Sayak Paul, Khanh LeViet and Hoi Lam

This is part 3 of an end-to-end tutorial on how to convert a TF 1.x model to TensorFlow Lite (TFLite) and then deploy it to Android to transform a selfie image into a plausible anime image. (Part 1 | Part 2 | Part 3) This tutorial is the first in a series of E2E TFLite tutorials from awesome-tflite.

In Part 2 we got a TFLite model, and now we are ready to deploy the selfie2anime.tflite model to Android! The Android code is on GitHub here.

Here are the key features of the Android app:

  • Jetpack Navigation Component for UI navigation
  • CameraX for image capture
  • ML Model Binding for importing the .tflite model
  • Kotlin coroutines for async handling of the model inference

Here is the TFLite model implementation on Android step-by-step:

0. Download Android Studio 4.1 Preview

1. Create a new Android project and set up UI navigation

2. Set up the CameraX API for image capture

3. Import the selfie2anime.tflite model with ML Model Binding

4. Putting everything together:

  • Model input: capture selfie image with CameraX
  • Run inference on selfie image and create an anime
  • Display both the selfie image and the anime image in the UI
  • Use Kotlin coroutines to prevent the model inference from blocking the UI main thread

0. Download Android Studio 4.1 Preview

We will install Android Studio Preview (4.1 Beta 1) in order to use the new ML Model Binding feature, which imports a .tflite model and auto-generates code for it. You can explore the TFLite model visually and also use the generated classes directly in your Android project.

Download Android Studio Preview here. You should be able to run the Preview version side by side with your stable version. Make sure to update your Android Gradle plugin to at least 4.1.0-alpha10; otherwise the ML Model Binding menu may be inaccessible.

1. Create a new Android project and set up UI navigation

First, let’s create a new Android project with an empty Activity called MainActivity, which contains a companion object that defines the output directory where the image (captured with CameraX) will be stored.
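Here is a minimal sketch of what MainActivity could look like; the getOutputDirectory() helper name is illustrative rather than the exact code in the sample repo:

```kotlin
import android.content.Context
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import java.io.File

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
    }

    companion object {
        // Returns the directory where the selfie captured with CameraX will be saved.
        fun getOutputDirectory(context: Context): File {
            val mediaDir = context.externalMediaDirs.firstOrNull()?.let {
                File(it, context.getString(R.string.app_name)).apply { mkdirs() }
            }
            return if (mediaDir != null && mediaDir.exists()) mediaDir else context.filesDir
        }
    }
}
```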

Use the Jetpack Navigation component to navigate between the screens in the app. Please refer to my tutorial here for more details about this support library.

There are 3 screens in this sample app:

  • PermissionsFragment — handles checking of the camera permission
  • CameraFragment — handles camera setup and image capture
  • Selfie2animeFragment — handles the display of selfie and anime image in the UI

The navigation graph in nav_graph.xml defines the navigation between the three screens and the data passed from CameraFragment to Selfie2animeFragment.
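As a rough sketch, navigating from CameraFragment with the saved selfie's path could look like the snippet below; the action id and the argument key are assumptions that must match what is declared in nav_graph.xml:

```kotlin
import android.os.Bundle
import androidx.navigation.fragment.findNavController
import java.io.File

// Inside CameraFragment: navigate to Selfie2animeFragment once the selfie
// has been saved. The action id and the "photoFilePath" argument key are
// hypothetical and must match the entries declared in nav_graph.xml.
private fun navigateToResult(savedFile: File) {
    val args = Bundle().apply {
        putString("photoFilePath", savedFile.absolutePath)
    }
    findNavController().navigate(
        R.id.action_cameraFragment_to_selfie2animeFragment,
        args
    )
}
```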

2. Set up CameraX for image capture

CameraX is a Jetpack support library which makes camera app development much easier.

The Camera1 API was simple to use but lacked a lot of functionality. The Camera2 API provided finer control than Camera1 but is very complex: even a very basic example runs to almost 1,000 lines of code.

CameraX, on the other hand, is much easier to set up, with roughly a tenth of the code. In addition, it is lifecycle-aware, so you don’t need to write extra code to handle the camera lifecycle.

Here are the steps to set up CameraX for this sample app:

  • Update build.gradle dependencies (see the sketch after this list)
  • Use CameraFragment.kt to hold the CameraX code
  • Request camera permission
  • Update AndroidManifest.xml
  • Check permission in MainActivity.kt
  • Implement a viewfinder with the CameraX Preview class
  • Implement image capture
  • Capture an image and convert it to a Bitmap
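For reference, the CameraX dependencies in the module-level build.gradle.kts might look like the following; the artifact versions are only indicative of what was current around the time of writing, so check the CameraX release notes for the latest ones:

```kotlin
// Module-level build.gradle.kts (versions are illustrative, not prescriptive).
dependencies {
    implementation("androidx.camera:camera-camera2:1.0.0-beta07")
    implementation("androidx.camera:camera-lifecycle:1.0.0-beta07")
    implementation("androidx.camera:camera-view:1.0.0-alpha14")
}
```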

Once we capture an image, we convert it to a Bitmap, which we can pass to the TFLite model for inference. Then we navigate to a new screen, Selfie2animeFragment.kt, where both the original selfie and the anime image are displayed.
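A rough sketch of that capture path, assuming the capture format is JPEG (the helper names here are illustrative, and the actual code in the repo may differ):

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.Matrix
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureException
import androidx.camera.core.ImageProxy
import java.util.concurrent.Executors

// Inside CameraFragment: take a picture and hand the decoded Bitmap off for saving.
private val executor = Executors.newSingleThreadExecutor()

private fun takeSelfie(imageCapture: ImageCapture) {
    imageCapture.takePicture(executor, object : ImageCapture.OnImageCapturedCallback() {
        override fun onCaptureSuccess(image: ImageProxy) {
            val selfie = image.toRotatedBitmap(image.imageInfo.rotationDegrees)
            image.close()
            // Save the Bitmap to the output directory from MainActivity, then
            // navigate to Selfie2animeFragment with the file path.
        }

        override fun onError(exception: ImageCaptureException) {
            exception.printStackTrace()
        }
    })
}

// Decode the JPEG bytes from the ImageProxy and apply the rotation reported by CameraX.
private fun ImageProxy.toRotatedBitmap(rotationDegrees: Int): Bitmap {
    val buffer = planes[0].buffer
    val bytes = ByteArray(buffer.remaining()).also { buffer.get(it) }
    val decoded = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
    val matrix = Matrix().apply { postRotate(rotationDegrees.toFloat()) }
    return Bitmap.createBitmap(decoded, 0, 0, decoded.width, decoded.height, matrix, true)
}
```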

3. Import TensorFlow Lite model

Now that the UI code is complete, it’s time to import the TensorFlow Lite model for inference. ML Model Binding takes care of this with ease. In Android Studio, go to File > New > Other > TensorFlow Lite Model.

  • Specify the selfie2anime.tflite file location.
  • “Auto add build feature and required dependencies to gradle” is checked by default.
  • Make sure to also check “Auto add TensorFlow Lite gpu dependencies to gradle”, since the selfie2anime model is quite slow and we will need to enable the GPU delegate.

This import does two things:

  • Automatically creates an ml folder and places the selfie2anime.tflite model file there.
  • Auto-generates a Java class called Selfie2anime.java under the folder app/build/generated/ml_source_out/debug/com/tflite/selfie2anime/ml, which handles tasks such as model loading, image pre-processing and post-processing, and running model inference to convert the selfie image into an anime image.

Once the import completes, Android Studio displays the selfie2anime.tflite model’s metadata as well as code snippets in both Kotlin and Java that we can copy and paste in order to use the model:
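The exact generated API depends on the model’s metadata, but the Kotlin usage typically looks roughly like the sketch below; the output accessor name is the default one the code generator uses when the metadata does not name the tensor, and yours may differ:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import com.tflite.selfie2anime.ml.Selfie2anime
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.support.model.Model
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer

// Run the selfie through the generated Selfie2anime wrapper on the GPU delegate.
// The accessor outputFeature0AsTensorBuffer is an assumption about the generated code.
fun runSelfie2anime(context: Context, selfie: Bitmap): TensorBuffer {
    val options = Model.Options.Builder()
        .setDevice(Model.Device.GPU)
        .build()

    val model = Selfie2anime.newInstance(context, options)
    val outputs = model.process(TensorImage.fromBitmap(selfie))
    model.close()

    return outputs.outputFeature0AsTensorBuffer
}
```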

4. Putting everything together

Now that we have set up the UI navigation, configured CameraX for image capture, and imported the selfie2anime.tflite model, let’s put all the pieces together! First we capture a selfie image with CameraX in CameraFragment.kt under imageCapture?.takePicture(); then in onCaptureSuccess() an ImageProxy is returned. We convert the ImageProxy to a Bitmap and save it to the output directory defined earlier in MainActivity.

With the Jetpack Navigation component, we can easily navigate to Selfie2animeFragment and pass the image directory location as a string argument.

Then, in Selfie2animeFragment.kt, we retrieve the file directory string where the selfie was stored, create an image file, and convert it to a Bitmap that can be used as the input to the TFLite model.

We run model inference on the selfie image to create an anime image, and display both the selfie and the anime image in the UI.

Note: the inference takes quite a long time, so we use a Kotlin coroutine to prevent it from blocking the UI main thread, and show a ProgressBar until inference completes.
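A sketch of that pattern inside Selfie2animeFragment, assuming the runSelfie2anime() helper from the earlier snippet, view references named progressBar, selfieImageView, and animeImageView, and a hypothetical tensorBufferToBitmap() conversion helper (all of these names are assumptions):

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.view.View
import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// Inside Selfie2animeFragment: run inference on a background dispatcher and
// only touch the views on the main thread once the result is ready.
private fun showAnime(photoFilePath: String) {
    progressBar.visibility = View.VISIBLE

    viewLifecycleOwner.lifecycleScope.launch(Dispatchers.Default) {
        val selfie = BitmapFactory.decodeFile(photoFilePath)
        val animeBuffer = runSelfie2anime(requireContext(), selfie)
        // Hypothetical helper: convert the model's output tensor back to a Bitmap;
        // the conversion depends on the model's output shape and value range.
        val anime: Bitmap = tensorBufferToBitmap(animeBuffer)

        withContext(Dispatchers.Main) {
            selfieImageView.setImageBitmap(selfie)
            animeImageView.setImageBitmap(anime)
            progressBar.visibility = View.GONE
        }
    }
}
```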

Here is what we have once all pieces are put together:

This brings us to the end of the tutorial. We hope you have enjoyed reading it and will apply what you learned to your real-world applications with TensorFlow Lite. If you have created any cool samples with what you learned here, please remember to add them to awesome-tflite!
