[Tensorflow Android] Quickly Create Image Editing App Like Prisma on Android

Ajeet Kumar
Sep 8, 2018 · 7 min read

TensorFlow is an open-source library released by Google for machine learning. It performs exceptionally well on deep learning tasks compared to its competitors, and high-level libraries like Keras make it easy to learn, train and deploy neural networks for numerous tasks like image classification, NLP, etc.

Soon after the launch, a lighter version of TensorFlow was announced for mobile and embedded systems like Android phones, the Raspberry Pi, etc.

What is the big deal?

Until now, all the intelligence was concentrated on the server side, and that created significant delays for many real-time requirements.

As the number of mobile users grows, so will the demand for more intelligent, personal and immersive experiences on mobile. And I strongly believe that TensorFlow for mobile can bridge that gap by providing fast, on-device processing of data like images and text, delivering a personalized experience to each individual.

A few of the areas where deep learning on mobile can improve the user’s experience are:

  1. Language Translation: With the help of the camera, a user can understand a food menu, a traffic sign, etc.
  2. Image Classification: One reason searching for anything in the Photos app is so fast is that it internally applies deep learning techniques to extract information from images.
  3. Augmented Reality: One example here could be a user seeing information about plants, seeds or any other object just by pointing the camera at it. Snapchat’s filters are a great example of how engaging experiences can be created using machine learning / deep learning.

There are numerous other possibilities on which many startups around the world are working day and night.

Getting your feet wet..

So to start down this exciting path, we will build a simple “Hello World!”-ish application demonstrating the steps to successfully deploy a TensorFlow model on Android.

For this demonstration, we will be using the famous “Artistic Style Transfer” neural network, which takes in a camera image and applies the style of a selected painting. You can observe similar behavior in the PRISMA app.

Preface

The code that generated this network is available here : https://github.com/tensorflow/magenta/blob/master/magenta/models/image_stylization/model.py#L28

It’s part of the broader Magenta project on GitHub. Check it out for many other interesting projects using TensorFlow.

Note: One thing that needs to be pointed out here is that a neural network cannot be deployed as-is on a mobile device. Owing to hardware limitations in processor, memory, etc., each network must be optimized and transformed to use smaller data types and as few redundant calculations as possible.
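For context, a common way to do this shrinking at the time was TensorFlow's Graph Transform tool, run from a TensorFlow source checkout. A rough sketch of an invocation is below; the file paths, node names and exact transform list are placeholders, not values from this project:

```shell
# Sketch only: run from a TensorFlow source checkout.
# Paths and node names below are placeholders.
bazel run tensorflow/tools/graph_transforms:transform_graph -- \
  --in_graph=stylize.pb \
  --out_graph=stylize_quantized.pb \
  --inputs='input' \
  --outputs='output' \
  --transforms='strip_unused_nodes fold_constants fold_batch_norms quantize_weights'
```

The `quantize_weights` transform is what converts the 32-bit float weights to smaller 8-bit values, which is why the graph we use here is named stylize_quantized.pb.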

Step 1: Create a project and import necessary library

Let’s create a new project with an empty activity as below.

And add the following libraries to build.gradle (Module: app). They will import TensorFlow for Android and Butter Knife, which is a very popular and useful library for accessing UI elements.

// add tensorflow library. "+" will make it download the latest version
implementation 'org.tensorflow:tensorflow-android:+'
// add Butter Knife. It's a great library to access UI elements without writing findViewById()
implementation 'com.jakewharton:butterknife:8.8.1'
annotationProcessor 'com.jakewharton:butterknife-compiler:8.8.1'

Step 2: Add architecture specific JNI libraries

Copy all the folders, except stylize_quantized.pb, from the GitHub repository mentioned here

to the libs folder inside the app folder of the project. If the libs folder is not there, please create it. At the end, the project structure should look like this.

These files are JNI libraries that our app will use. But for them to be recognized as such, we have to make some changes to our build.gradle file.

Copy these lines of code to android{} block and sync the gradle file.

// Add these lines after copying the *.so files into app/libs
sourceSets {
    main {
        jniLibs.srcDirs = ['libs']
    }
}

After the sync, the Android project will recognize these as library folders. If you change the view to Android in the Project pane, you should see something similar.

Step 3: Add the optimized Tensorflow graph

Create an assets folder and copy stylize_quantized.pb from the above-mentioned GitHub repo into it.

The final structure of the project now should be similar to what is shown above.

Step 4: Create UI

To keep the project simple, we have chosen only 3 styles to apply. But you can modify the project to use all 26 styles that this TensorFlow graph has been trained on.
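For reference, the graph takes a weight per style on its style input, so selecting a single painting amounts to feeding a one-hot vector. A minimal sketch of building that vector (StyleSelector and styleWeights are hypothetical names, not code from the repo):

```java
public class StyleSelector {

    // Build the weight vector fed to the graph's style input:
    // 1.0 for the chosen painting, 0.0 for the rest. Fractional
    // weights that sum to 1.0 would blend several styles together.
    public static float[] styleWeights(int numStyles, int selected) {
        float[] vals = new float[numStyles];
        vals[selected] = 1.0f;
        return vals;
    }

    public static void main(String[] args) {
        // All 26 trained styles, with the 4th one selected
        float[] w = styleWeights(26, 3);
        System.out.println(w.length + " styles, weight[3] = " + w[3]);
    }
}
```

In the app this array corresponds to the styleVals buffer that is later fed to the STYLE_NODE input.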

Apart from that, we will have a menu button which takes the image from the camera and loads it into the image view. The UI graphic and code are provided below.

You can refer to the XML code here. To add the menu, create a resource folder inside res and name it menu. Inside menu, create a layout file named main_menu.xml and paste the following code into it.

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto">

    <item
        android:id="@+id/camera_image"
        android:title="camera"
        android:icon="@android:drawable/ic_menu_camera"
        app:showAsAction="always" />
</menu>

Finally to the exciting bit..

MainActivity.java houses all the functionality of the app. A link to the repo is given below for reference. Within it, four portions of the code are especially important.

1. Creating an instance of TensorFlowInferenceInterface in onCreate(), passing the path to the TensorFlow graph, in our case stylize_quantized.pb:

inferenceInterface = new TensorFlowInferenceInterface(getAssets(), MODEL_FILE);

2. Taking an image from the camera: Here I am opening the camera while passing the URI for the image. This makes sure that we get a full-resolution image.
File imageFile = new File(android.os.Environment.getExternalStorageDirectory(), "temp.jpg");

fileUri = Uri.fromFile(imageFile);

Intent intent = new Intent("android.media.action.IMAGE_CAPTURE");
intent.putExtra(MediaStore.EXTRA_OUTPUT, fileUri);
startActivityForResult(intent, OPEN_CAMERA_FOR_CAPTURE);

3. **Scaling the image for the TensorFlow network**: This is an important bit. Every TensorFlow network has a specification of what kind of input it can take. In our case, the network requires a 256x256 color image. So, in the onActivityResult() method we get the full-size image from the above-mentioned path and downscale it to 256x256 resolution. The same image is shown in the central image view.

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);

    if (requestCode == OPEN_CAMERA_FOR_CAPTURE && resultCode == Activity.RESULT_OK) {
        try {
            cameraImageView.setImageBitmap(getScaledBitmap(fileUri));
        } catch (NullPointerException e) {
            e.printStackTrace();
        }
    }
}

public Bitmap getScaledBitmap(Uri fileUri) {
    Bitmap scaledPhoto = null;

    try {
        Bitmap bitmap = BitmapFactory.decodeFile(fileUri.getPath());

        // Read the EXIF orientation so portrait shots are rotated upright
        ExifInterface ei = new ExifInterface(fileUri.getPath());
        int orientation = ei.getAttributeInt(ExifInterface.TAG_ORIENTATION,
                ExifInterface.ORIENTATION_UNDEFINED);

        switch (orientation) {
            case ExifInterface.ORIENTATION_ROTATE_90:
                bitmap = rotateImage(bitmap, 90);
                break;
            case ExifInterface.ORIENTATION_ROTATE_180:
                bitmap = rotateImage(bitmap, 180);
                break;
        }

        // Scale the full-resolution camera image down to the network's input size
        scaledPhoto = Bitmap.createScaledBitmap(bitmap, desiredSize, desiredSize, false);
    } catch (Exception ex) {
        ex.printStackTrace();
    }

    return scaledPhoto;
}

4. Applying a style to the image: Once the user has selected a style, they can press the Apply button to apply it to the captured image.

private void stylizeImage(Bitmap bitmap) {
    cameraImageView.setImageBitmap(bitmap);

    bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0,
            bitmap.getWidth(), bitmap.getHeight());

    // Unpack each ARGB pixel into three normalized float channels
    for (int i = 0; i < intValues.length; ++i) {
        final int val = intValues[i];
        floatValues[i * 3] = ((val >> 16) & 0xFF) / 255.0f;
        floatValues[i * 3 + 1] = ((val >> 8) & 0xFF) / 255.0f;
        floatValues[i * 3 + 2] = (val & 0xFF) / 255.0f;
    }

    // Copy the input data into TensorFlow.
    inferenceInterface.feed(INPUT_NODE, floatValues, 1,
            bitmap.getWidth(), bitmap.getHeight(), 3);
    inferenceInterface.feed(STYLE_NODE, styleVals, NUM_STYLES);

    // Execute the output node's dependency sub-graph.
    inferenceInterface.run(new String[] {OUTPUT_NODE}, false);

    // Copy the data from TensorFlow back into our array.
    inferenceInterface.fetch(OUTPUT_NODE, floatValues);

    // Repack the normalized floats into opaque ARGB pixels
    for (int i = 0; i < intValues.length; ++i) {
        intValues[i] = 0xFF000000
                | (((int) (floatValues[i * 3] * 255)) << 16)
                | (((int) (floatValues[i * 3 + 1] * 255)) << 8)
                | ((int) (floatValues[i * 3 + 2] * 255));
    }

    bitmap.setPixels(intValues, 0, bitmap.getWidth(), 0, 0,
            bitmap.getWidth(), bitmap.getHeight());

    cameraImageView.setImageBitmap(bitmap);
}
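The pixel packing in stylizeImage can be exercised outside Android with plain Java; this small round-trip sketch (PixelCodec is a hypothetical class, not from the repo) mirrors the two conversion loops above:

```java
public class PixelCodec {

    // Unpack ARGB ints into normalized RGB floats,
    // the format the stylize graph's input node expects
    public static float[] toFloats(int[] pixels) {
        float[] out = new float[pixels.length * 3];
        for (int i = 0; i < pixels.length; i++) {
            int val = pixels[i];
            out[i * 3] = ((val >> 16) & 0xFF) / 255.0f;
            out[i * 3 + 1] = ((val >> 8) & 0xFF) / 255.0f;
            out[i * 3 + 2] = (val & 0xFF) / 255.0f;
        }
        return out;
    }

    // Pack normalized floats back into fully opaque ARGB ints,
    // as done with the graph's output before setPixels()
    public static int[] toPixels(float[] floats) {
        int[] out = new int[floats.length / 3];
        for (int i = 0; i < out.length; i++) {
            out[i] = 0xFF000000
                    | (((int) (floats[i * 3] * 255)) << 16)
                    | (((int) (floats[i * 3 + 1] * 255)) << 8)
                    | ((int) (floats[i * 3 + 2] * 255));
        }
        return out;
    }

    public static void main(String[] args) {
        int[] px = { 0xFFFFFFFF, 0xFF000000 };
        int[] roundTrip = toPixels(toFloats(px));
        System.out.println(java.util.Arrays.toString(roundTrip));
    }
}
```

Note that 0x00 and 0xFF channels round-trip exactly, while intermediate values can lose at most one level because the `(int)` cast truncates rather than rounds.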

Once done, compare the code with the repo and build the project on a device.

Yay! All Done!!

Congratulations! We have successfully deployed a tensorflow graph for the Artistic Style Transfer.

GitHub repository for this project can be found here:

Where to go from here..

You can improve upon the project to do more sensible cropping, keeping the aspect ratio the same.
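One simple approach, sketched here as a plain helper (CenterCrop is hypothetical, not part of the project): take the largest centered square from the camera frame before scaling to 256x256, so the image is not stretched.

```java
public class CenterCrop {

    // Compute the largest centered square crop of a width x height
    // image, returned as {left, top, size}. Scaling that square to
    // the network's 256x256 input preserves the aspect ratio.
    public static int[] squareCrop(int width, int height) {
        int size = Math.min(width, height);
        int left = (width - size) / 2;
        int top = (height - size) / 2;
        return new int[]{ left, top, size };
    }

    public static void main(String[] args) {
        // A typical landscape full-resolution camera frame
        int[] c = squareCrop(4032, 3024);
        System.out.println(c[0] + "," + c[1] + "," + c[2]); // 504,0,3024
    }
}
```

On Android you would then call Bitmap.createBitmap(bitmap, left, top, size, size) followed by Bitmap.createScaledBitmap(..., 256, 256, false).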

You can also try preparing a TensorFlow network for mobile deployment yourself (freezing the graph, etc.), apart from experimenting with the numerous other TensorFlow examples.


If you like this article make sure to give it a 👏. And if you would like to support me, please consider buying me a coffee :)

Digital Curry

Programming recipes covering android, kotlin, tensorflow, nodejs, vision, deep learning etc.
