Who’s afraid of Machine Learning? Part 5: Running ML-Kit On Device

Intro to ML & ML-Kit for mobile developers

Britt Barak
Oct 5, 2018 · 5 min read

The last posts gave an intro to ML and to ML Kit, and discussed why we need a mobile-specific solution for ML capabilities.

Now… Time to write some code!

Before getting started:

3. Add the firebase-ml-vision library to your app: in your app-level build.gradle file add:

dependencies {
    // ...
    implementation 'com.google.firebase:firebase-ml-vision:17.0.0'
}

The firebase-ml-vision library supports all the logic needed for the ML Kit out-of-the-box use cases that have to do with vision (which are all of those currently available, as outlined in a previous post).

As mentioned, we’ll use a local (on-device) detector, a cloud-based detector, and a custom model. Each has 4 steps:

0. Setting up (it’s not cheating :) it doesn’t really count as a step…)
1. Setup the classifier
2. Process the input
3. Run the model
4. Process the output

Running a local (on-device) model

Choosing a local model is the lightweight, offline-supported option. In return, its accuracy is limited, which we must take into account.

The UI takes the bitmap → calls ImageClassifier.executeLocal(bitmap) → ImageClassifier calls LocalClassifier.execute().

Step 0: Setting up

In your app-level build.gradle file add:

dependencies {
// ...
implementation 'com.google.firebase:firebase-ml-vision-image-label-model:15.0.0'
}

Optional, but recommended: by default, the ML model itself is downloaded only once you execute the detector. This means some latency at the first execution, as well as required network access. To bypass that, and have the ML model downloaded as the app is installed from the Play Store, simply add the following declaration to your app’s AndroidManifest.xml file:

<application ...>
    ...
    <meta-data
        android:name="com.google.firebase.ml.vision.DEPENDENCIES"
        android:value="label" />
    <!-- To use multiple models: android:value="label,barcode,face..." -->
</application>

Step 1: Setup the Classifier

Create a LocalClassifier class that holds the detector object:

public class LocalClassifier {

    private FirebaseVisionLabelDetector detector =
            FirebaseVision.getInstance().getVisionLabelDetector();
}

This is the basic detector instance. You can be pickier about the output returned and add a confidence threshold: a value between 0 and 1, with 0.5 as the default.

public class LocalClassifier {

    private FirebaseVisionLabelDetectorOptions localDetectorOptions =
            new FirebaseVisionLabelDetectorOptions.Builder()
                    .setConfidenceThreshold(ImageClassifier.CONFIDENCE_THRESHOLD)
                    .build();

    private FirebaseVisionLabelDetector detector =
            FirebaseVision.getInstance().getVisionLabelDetector(localDetectorOptions);
}
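Conceptually, the threshold simply filters out results whose confidence falls below the cutoff before they ever reach your listener. A minimal plain-Java sketch of that behavior (the class and method names here are illustrative, not part of the ML Kit API):

```java
import java.util.Arrays;

public class ThresholdDemo {

    // Keep only confidences at or above the cutoff (ML Kit's default is 0.5).
    static double[] applyThreshold(double[] confidences, double threshold) {
        return Arrays.stream(confidences)
                .filter(c -> c >= threshold)
                .toArray();
    }

    public static void main(String[] args) {
        double[] kept = applyThreshold(new double[]{0.91, 0.42, 0.77, 0.12}, 0.5);
        System.out.println(Arrays.toString(kept)); // [0.91, 0.77]
    }
}
```

Raising the threshold gives fewer but more reliable labels; lowering it surfaces more guesses.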

Step 2: Process The Input

FirebaseVisionLabelDetector knows how to work with an input of type FirebaseVisionImage. You can obtain a FirebaseVisionImage instance from a Bitmap, a media.Image, a ByteBuffer, a byte array, or a file on the device.

Since we work with a Bitmap, the input processing is done simply as such:

public class LocalClassifier {
    //...
    private FirebaseVisionImage image;

    public void execute(Bitmap bitmap) {
        image = FirebaseVisionImage.fromBitmap(bitmap);
    }
}

Step 3: Run The Model

This is where the magic happens! 🔮 Since the model does take some computation time, we should run it asynchronously, and return the success or failure result using listeners.

public class LocalClassifier {
    //...
    public void execute(Bitmap bitmap, OnSuccessListener successListener,
                        OnFailureListener failureListener) {
        //...
        detector.detectInImage(image)
                .addOnSuccessListener(successListener)
                .addOnFailureListener(failureListener);
    }
}

Step 4: Process The Output

The detection output is provided to the OnSuccessListener. I prefer to have the OnSuccessListener passed to LocalClassifier from ImageClassifier, which handles the communication between the UI and LocalClassifier.

The UI calls ImageClassifier.executeLocal(), which should look something like this:

On ImageClassifier.java:

localClassifier = new LocalClassifier();

public void executeLocal(Bitmap bitmap, ClassifierCallback callback) {
    successListener = new OnSuccessListener<List<FirebaseVisionLabel>>() {
        public void onSuccess(List<FirebaseVisionLabel> labels) {
            processLocalResult(labels, callback);
        }
    };
    localClassifier.execute(bitmap, successListener, failureListener);
}

processLocalResult() just prepares the output labels to display in the UI.

In my specific case, I chose to display the 3 results with the highest probability. You may choose any other format. To complete the picture, this is my implementation:

On ImageClassifier.java:

void processLocalResult(List<FirebaseVisionLabel> labels, ClassifierCallback callback) {
    labels.sort(localLabelComparator);
    resultLabels.clear();
    FirebaseVisionLabel label;
    for (int i = 0; i < Math.min(3, labels.size()); ++i) {
        label = labels.get(i);
        resultLabels.add(label.getLabel() + ":" + label.getConfidence());
    }
    callback.onClassified("Local Model", resultLabels);
}

ClassifierCallback is a simple interface I created in order to communicate the results back to the UI to display. We could, of course, use any other approach.

interface ClassifierCallback {
void onClassified(String modelTitle, List<String> topLabels);
}
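The sort-and-take-top-3 logic above can be sketched independently of Firebase. The comparator below sorts by descending confidence (the article's localLabelComparator isn't shown, so this is an assumption about its behavior), and the Label class is a hypothetical stand-in for FirebaseVisionLabel:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class TopLabelsDemo {

    // Hypothetical stand-in for FirebaseVisionLabel: a name plus a confidence score.
    static class Label {
        final String text;
        final float confidence;
        Label(String text, float confidence) {
            this.text = text;
            this.confidence = confidence;
        }
    }

    // Assumed behavior of localLabelComparator: highest confidence first.
    static final Comparator<Label> BY_CONFIDENCE_DESC =
            Comparator.comparingDouble((Label l) -> l.confidence).reversed();

    // Mirrors processLocalResult(): sort, keep the top 3, format for the UI.
    static List<String> topLabels(List<Label> labels) {
        labels.sort(BY_CONFIDENCE_DESC);
        List<String> result = new ArrayList<>();
        for (int i = 0; i < Math.min(3, labels.size()); ++i) {
            Label label = labels.get(i);
            result.add(label.text + ":" + label.confidence);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Label> labels = new ArrayList<>(Arrays.asList(
                new Label("Food", 0.96f),
                new Label("Fruit", 0.92f),
                new Label("Plant", 0.71f),
                new Label("Dessert", 0.55f)));
        System.out.println(topLabels(labels)); // [Food:0.96, Fruit:0.92, Plant:0.71]
    }
}
```

Swapping in a different comparator, or a different cap than 3, changes only the presentation; the detector output itself is untouched.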

That’s it!

You used your first ML model to classify an image! 🎉 How simple was that?!

Let’s run the app and see some results!

Pretty good!!! We got some general labels like “food” or “fruit”, which definitely fit the image, but I’d expect the model to be able to tell me which fruit it is…

Get the final code for this part on this demo’s repo, on branch 1.run_local_model.

Next up: let’s try to get some more indicative and accurate labels, by using the cloud based detector… on the next post!

Google Developers Experts

Experts on various Google products talking tech.
