Selfie2Anime with TFLite — Part 1: Overview

AI for Art and Design

Margaret Maynard-Reid
Google Developer Experts
3 min read · Jul 15, 2020


Written by ML GDEs Margaret Maynard-Reid and Sayak Paul | Reviewed by Khanh LeViet and Hoi Lam

This is part 1 of an end-to-end tutorial on how to convert a TF 1.x model to TensorFlow Lite (TFLite) and then deploy it to an Android app that transforms a selfie into a plausible anime-style image.

This tutorial is divided into three parts; feel free to follow along end to end, or skip to the part that is most interesting or relevant to you:

  • Part 1: an overview and introduction to the Selfie2Anime project with TensorFlow Lite (this one).
  • Part 2: how to create a SavedModel and then convert it to a TFLite model. The model saving step is performed in a TensorFlow 1.14 runtime, in which the model code was written, although the same method applies to most models written in TensorFlow 1.x. The model conversion step is performed in a TensorFlow 2.x runtime in order to leverage the latest features of TFLiteConverter. A minimal sketch of both steps follows below.
  • Part 3: deploy the TFLite model to an Android application.
E2E Tutorial: Selfie2Anime with TensorFlow Lite
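To make the save-then-convert flow concrete, here is a minimal sketch of both steps. The names sess, input_tensor, output_tensor, and the file paths are placeholders for illustration; the Colab notebook linked below restores the actual U-GAT-IT checkpoint:

```python
# --- Step 1: run under TensorFlow 1.14, where the U-GAT-IT code was written ---
import tensorflow as tf  # assumed: a 1.14 runtime

# `sess`, `input_tensor`, and `output_tensor` are placeholders; in practice they
# come from the restored U-GAT-IT graph and its checkpoint.
tf.saved_model.simple_save(
    sess,
    "saved_model",
    inputs={"selfie": input_tensor},
    outputs={"anime": output_tensor},
)

# --- Step 2: run under TensorFlow 2.x to use the latest (MLIR-based) converter ---
import tensorflow as tf  # assumed: a 2.x runtime, separate from Step 1

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
tflite_model = converter.convert()
with open("selfie2anime.tflite", "wb") as f:
    f.write(tflite_model)
```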

Tutorial objectives

Here are the objectives of this end-to-end tutorial:

  • Provide a reference for developers looking to convert models written in TensorFlow 1.x to TFLite using the new features of the latest (v2) converter, such as MLIR-based conversion, more supported ops, and improved kernels.
  • Understand how to use TFLite tools such as the Android Benchmark Tool, Model Metadata (see the sketch after this list), and Codegen.
  • Guide developers on how to easily create a mobile application with TFLite models, using the ML Model Binding feature in Android Studio.
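As a taste of the Model Metadata tooling, the sketch below attaches top-level metadata to the converted model. It is a minimal sketch assuming the tflite-support package and the selfie2anime.tflite file name from the conversion step; the full notebook also describes the input and output tensors so that Codegen and ML Model Binding can generate typed wrappers:

```python
from tflite_support import flatbuffers
from tflite_support import metadata as _metadata
from tflite_support import metadata_schema_py_generated as _metadata_fb

# Top-level model description; a complete writer would also describe the
# input/output tensors (shape, normalization, color space).
model_meta = _metadata_fb.ModelMetadataT()
model_meta.name = "Selfie2Anime"
model_meta.description = "Translates a selfie into an anime-style image (U-GAT-IT)."

# Serialize the metadata FlatBuffer and embed it into the .tflite file.
builder = flatbuffers.Builder(0)
builder.Finish(model_meta.Pack(builder),
               _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)

populator = _metadata.MetadataPopulator.with_model_file("selfie2anime.tflite")
populator.load_metadata_buffer(builder.Output())
populator.populate()
```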

Please follow along with this Colab Notebook for model saving/conversion, and with the Android code on GitHub here. If you are not familiar with the SavedModel format, please refer to the TensorFlow documentation for details.

U-GAT-IT

We used the Generative Adversarial Network (GAN) model proposed in the paper Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation (also known as U-GAT-IT). The paper provides two generators: one that converts a selfie to an anime-style image, and another that converts an anime image to a selfie. Here we implemented only the Selfie2Anime model, since it better reflects the real-world use case.

Limitations

The selfie2anime model from U-GAT-IT seems to perform well only on female faces, due to a bias in the training data, which contains only female human faces and anime faces. One way to improve the model is to retrain it on face images with diversity across ethnicity, gender, and age, such as the FairFace dataset. We will leave this as an exercise for the reader.

The converted TFLite model was quantized, but the model is not yet supported by the GPU delegate on Android, so you may notice slightly longer inference times.
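In the meantime, inference runs on the CPU, where multi-threading helps recover some speed. Below is a minimal sketch assuming the selfie2anime.tflite file name from the conversion step and a random input in place of a real preprocessed selfie:

```python
import numpy as np
import tensorflow as tf

# Four CPU threads is an illustrative choice; tune for the target device.
interpreter = tf.lite.Interpreter(model_path="selfie2anime.tflite", num_threads=4)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Stand-in input; a real app feeds a normalized RGB crop of the selfie.
dummy = np.random.uniform(-1.0, 1.0, size=inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
anime = interpreter.get_tensor(out["index"])
print(anime.shape)
```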

In spite of these limitations, we feel it is still very valuable to share the end-to-end process, along with the challenges we faced. We hope this tutorial and the sample app will help you implement your own real-world applications.

Community collaboration

This tutorial was created through a great collaboration between ML GDEs and the TensorFlow Lite team. It is the first in a series of TensorFlow Lite end-to-end tutorials. Check out the awesome-tflite repo for app ideas and upcoming E2E tutorials; there you will also find an awesome list of TensorFlow Lite models, samples, tutorials, tools, and learning resources.

We would like to thank Khanh LeViet and Lu Wang (TensorFlow Lite team), Hoi Lam (Android ML), and Soonson Kwon (Google Developer Experts program) for their collaboration and continuous support.

Let’s get started with model saving and conversion in Part 2 of the tutorial.
