ML with Android

Kushal Dave
6 min read · Feb 9, 2020

In this article, we shall see how to train a model and integrate it into an Android app. If you already have an ML model and want to see how to integrate it into Android, you can scroll down to the Implementation section.

The objective of this article is to demonstrate how to train an ML model and use it in Android. You won’t need any machine learning knowledge to follow along.

An ML model is a trained (or yet-to-be-trained) model that is expected to perform some intelligent task; in our case, we are training it to identify specific objects.

Training a model

We are using Google’s Teachable Machine to train a model. It is a fantastic tool that allows us to train a model without requiring any know-how of machine learning. Currently, it enables us to train models to recognize objects in images, a particular sound, or a pose. For our project, we are using images to recognize objects.

  • Go to the Teachable Machine website here

Now, for our model to recognize particular objects, we provide multiple images of each object. We can use a webcam or upload a set of images; the more images we upload, the more accurate the results. Make sure to take pictures from different positions, angles, and environments.

  • Provide pictures and edit the class name to the name of the object

I have added two classes for recognizing two different cars as “Car 1” and “Car 2”.

  • Once done, click on “Train Model”

Once the model is trained, we get to watch a live preview. Our model now differentiates between the two objects when they are placed in front of the webcam. The only drawback is that it always returns one of the class values, so if none of the class objects (cars, in this case) is placed in front of the webcam, it still shows the value of the first class of our model (in this case, “Car 1”).
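On the app side, one way to soften this drawback is to accept a prediction only when its probability clears a threshold. Here is a minimal Kotlin sketch; the helper name and the 0.7 threshold are my assumptions, not something Teachable Machine provides:

// Hypothetical helper: report "Unknown" when the model is not confident
// enough about any class, instead of always surfacing the first class
fun labelFor(probabilities: FloatArray, labels: List<String>, threshold: Float = 0.7f): String {
    // Index of the most probable class
    val best = probabilities.indices.maxByOrNull { probabilities[it] } ?: return "Unknown"
    return if (probabilities[best] >= threshold) labels[best] else "Unknown"
}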

  • Click on “Export Model” beside the preview
  • In the dialog box, select “Tensorflow Lite” → “Floating point” and click “Download My Model”
  • Extracting the downloaded archive gives us a “.tflite” model file and a “.txt” label file, which we shall use in Android
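The label file simply lists one class per line, prefixed with its index, which is why the code later strips the first two characters of each label. With the classes above, it should look something like this:

0 Car 1
1 Car 2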

Implementation on Android

There are two ways to integrate our model on Android:

  1. Using the TensorFlow Lite library directly (see the sketch below)
  2. Using Firebase ML Kit
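For context, option 1 looks roughly like this when the TensorFlow Lite interpreter is used directly. This is only a minimal sketch: loadModelFile() is a hypothetical helper that memory-maps model_unquant.tflite from assets, and the org.tensorflow:tensorflow-lite dependency is assumed:

import org.tensorflow.lite.Interpreter

// Sketch only: loadModelFile() is a hypothetical helper returning a
// MappedByteBuffer of the .tflite model from assets
val interpreter = Interpreter(loadModelFile())
val input = Array(1) { Array(224) { Array(224) { FloatArray(3) } } } // 1 x 224 x 224 x 3 image
val output = Array(1) { FloatArray(2) }                             // 1 x 2 class probabilities
interpreter.run(input, output)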

For our project, we will be using Firebase ML Kit because:

  • It is easy to set up
  • The model can be hosted on Firebase or bundled with the app
  • We can update our model without updating the application

Let’s get started.

  • Create a new project in Android Studio
  • Go to Tools → Firebase
  • In the Firebase navigation drawer, go to ML Kit and select any one of the four options
  • Click on “Connect to Firebase”, log in with Firebase, and select “Create new Firebase project”
  • Once the project is created, click on “Add ML Kit to your app” and accept the changes
  • In the app-level Gradle file, add this dependency:
implementation 'com.google.firebase:firebase-ml-model-interpreter:22.0.1'
  • In the same file, add the following to make sure the local model file is not compressed:
android {
    ...
    aaptOptions {
        noCompress "tflite" // Your model's file extension: "tflite", "lite", etc.
    }
    ...
}

And with that, we have successfully set up Firebase for our app.

Now, in order to use it, we have to add our model (the .tflite file) and the label file to the assets folder.

  • Create the assets folder by right-clicking on app → New → Folder → Assets Folder
  • Copy both files into the assets folder
  • Update the activity_main.xml file:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <TextView
        android:id="@+id/tvIdentifiedItem"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Loading..."
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
  • Add an image of one of the objects (in our case, Car 2) to the drawable folder
  • Add this code to MainActivity.kt:
package com.kdtech.mlwithandroid

import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.Color
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import com.google.firebase.ml.custom.*
import kotlinx.android.synthetic.main.activity_main.*

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // Load the model from the local assets folder
        val localModel = FirebaseCustomLocalModel.Builder()
            .setAssetFilePath("model_unquant.tflite")
            .build()

        val options = FirebaseModelInterpreterOptions.Builder(localModel).build()
        val interpreter = FirebaseModelInterpreter.getInstance(options)

        val inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
            .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 224, 224, 3))
            // Replace 2 with the number of classes in your model. For production
            // apps, you can read labels.txt here to get the number of classes dynamically.
            .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 2))
            .build()

        // We use a static image from drawable to keep the code minimal and avoid
        // distraction. The recommended approach is to get the image from the user
        // via the camera or device photos, moving this logic into a method that is
        // called for every image.
        val bitmap = Bitmap.createScaledBitmap(
            BitmapFactory.decodeResource(resources, R.drawable.car2),
            224, 224, true)

        val batchNum = 0
        val input = Array(1) { Array(224) { Array(224) { FloatArray(3) } } }
        for (x in 0..223) {
            for (y in 0..223) {
                val pixel = bitmap.getPixel(x, y)
                // Normalize channel values to [-1.0, 1.0]. This requirement varies by
                // model. For example, some models might require values to be normalized
                // to the range [0.0, 1.0] instead.
                input[batchNum][x][y][0] = (Color.red(pixel) - 127) / 255.0f
                input[batchNum][x][y][1] = (Color.green(pixel) - 127) / 255.0f
                input[batchNum][x][y][2] = (Color.blue(pixel) - 127) / 255.0f
            }
        }

        val inputs = FirebaseModelInputs.Builder()
            .add(input) // add() as many input arrays as your model requires
            .build()

        interpreter!!.run(inputs, inputOutputOptions)
            .addOnSuccessListener { result ->
                val output = result.getOutput<Array<FloatArray>>(0)
                val probabilities = output[0]
                // Labels come one per line (e.g. "0 Car 1"), so read them all and
                // show the label of the most probable class, stripping the index prefix
                val labels = assets.open("labels.txt").bufferedReader().readLines()
                var highestProbability = 0f
                for (i in probabilities.indices) {
                    if (probabilities[i] > highestProbability) {
                        highestProbability = probabilities[i]
                        tvIdentifiedItem.text = "The Image is of ${labels[i].substring(2)}"
                    }
                }
            }
            .addOnFailureListener { e ->
                // Task failed with an exception
                tvIdentifiedItem.text = "exception ${e.message}"
            }
    }
}
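As the comment in the code above mentions, for production apps you can derive the number of classes from the label file instead of hard-coding it. A minimal sketch of that change:

// Read the labels once and size the output tensor from them
val labels = assets.open("labels.txt").bufferedReader().readLines()
val inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 224, 224, 3))
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, labels.size))
    .build()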

Here we use a static image from drawable to keep the code minimal and avoid distraction. The recommended approach is to get the image from the user via the camera or device photos, handling all this logic in a method and calling it for every image.
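A rough sketch of that flow, assuming a classifyImage() method that wraps the pre-processing and interpreter code above (the method name and request code are illustrative):

// Additional imports needed: android.app.Activity, android.content.Intent,
// android.provider.MediaStore
private val REQUEST_IMAGE_CAPTURE = 1

private fun dispatchTakePictureIntent() {
    // Launch the device camera; the result arrives in onActivityResult()
    startActivityForResult(Intent(MediaStore.ACTION_IMAGE_CAPTURE), REQUEST_IMAGE_CAPTURE)
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == Activity.RESULT_OK) {
        // The camera returns a small thumbnail, which is fine since we scale to 224x224
        val bitmap = data?.extras?.get("data") as Bitmap
        classifyImage(bitmap) // run the same scaling, normalization, and inference as above
    }
}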

  • Run the app :)

With this, we have completed the local ML integration. You can find all the code on GitHub here.

  • For a combination of remote and local integration with ML Kit, we need to add the internet permission and make a few changes to the code (see the sketch below); check this branch
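As a minimal sketch of the remote part (the model name “car_model” is illustrative and must match the name you give the hosted model in the Firebase console):

// Requires <uses-permission android:name="android.permission.INTERNET" /> in AndroidManifest.xml
val remoteModel = FirebaseCustomRemoteModel.Builder("car_model").build()
val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi() // download only on Wi-Fi
    .build()
FirebaseModelManager.getInstance().download(remoteModel, conditions)
    .addOnSuccessListener {
        // Build the interpreter against the downloaded model and run
        // inference exactly as with the local model
        val options = FirebaseModelInterpreterOptions.Builder(remoteModel).build()
        val interpreter = FirebaseModelInterpreter.getInstance(options)
    }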

And lastly, let’s not forget to thank the developers of Teachable Machine and the Firebase team for their amazing products. They wrote thousands of lines of code, which enables us to train and use our models in these few lines.

Thank you for reading! Feel free to say hi or share your thoughts on Twitter @that_kushal_guy or in the responses below!

You can check out the iOS variant of this article here.

Also check out my other article on using Jetpack Compose (declarative UI) with MVVM architecture.
