Who’s afraid of Machine Learning? Part 6 : Running ML-Kit On Cloud

Intro to ML & ML-Kit for mobile developers

Britt Barak
Oct 5, 2018 · 5 min read

Last post we ran the local (on-device) model to classify an image. Now, it’s time to try to increase the labels’ accuracy (while allowing more latency) by running a cloud-based model. ☁

Photo by Jéan Béller on Unsplash

Before getting started:

If you followed along with the last post, you’re all set and can skip to the next section!

Otherwise, you should make sure to clone this demo’s code and add Firebase and ML Kit to your app. For guidance, check out the Before getting started section of the last post.

As you recall, for each model we have 4 steps to implement:

0. Setting up (it’s not cheating :) it doesn’t really count as a step…)
1. Set up the classifier
2. Process the input
3. Run the model
4. Process the output

Let’s get started:

Running a cloud based model

Step 0: Setting up

Cloud-based models belong to the Cloud Vision API, which you have to make sure is enabled for your project:

[Screenshot: enabling the Cloud Vision API for the project]
  • Note: for development, this configuration will do. However, prior to deploying to production, you should take some extra steps to ensure that no unauthorised calls are being made with your account. For that case, check out the instructions here.

Step 1: Setup The Classifier

Create a CloudClassifier class that holds the detector object:

public class CloudClassifier {

    private final FirebaseVisionCloudLabelDetector detector =
            FirebaseVision.getInstance().getVisionCloudLabelDetector();
}

It’s really almost the same as last post’s LocalClassifier, except for the type of the detector.

There are a few extra options we can set on the detector:

  • setMaxResults() — by default, 10 results are returned. If you need more than that, you’ll have to specify it. On the other hand, when designing the demo app I decided to present only the top 3 results; I can define that here and make the computation a little faster.
  • setModelType() — can be either STABLE_MODEL (the default) or LATEST_MODEL.
public class CloudClassifier {

    private final FirebaseVisionCloudDetectorOptions options =
            new FirebaseVisionCloudDetectorOptions.Builder()
                    .setModelType(FirebaseVisionCloudDetectorOptions.LATEST_MODEL)
                    .setMaxResults(ImageClassifier.RESULTS_TO_SHOW)
                    .build();

    private final FirebaseVisionCloudLabelDetector detector =
            FirebaseVision.getInstance().getVisionCloudLabelDetector(options);
}

Step 2: Process The Input

Similarly to the local detector, FirebaseVisionCloudLabelDetector takes a FirebaseVisionImage as input, which we will obtain from a Bitmap to keep the UI side simple.

Some more explanations about FirebaseVisionImage can be found on the previous post.

public class CloudClassifier {
    //...
    private FirebaseVisionImage image;

    public void execute(Bitmap bitmap) {
        image = FirebaseVisionImage.fromBitmap(bitmap);
    }
}
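As a side note, if your image doesn’t start out as a Bitmap, FirebaseVisionImage has other factory methods. A minimal sketch, assuming a helper class of my own (VisionImageFactory, not part of the demo) and placeholder arguments:

import android.content.Context;
import android.media.Image;
import android.net.Uri;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.common.FirebaseVisionImageMetadata;
import java.io.IOException;

class VisionImageFactory {

    // From a file on disk; throws if the Uri can't be read.
    static FirebaseVisionImage fromFile(Context context, Uri photoUri) throws IOException {
        return FirebaseVisionImage.fromFilePath(context, photoUri);
    }

    // From a live camera frame; the rotation constant must match the device orientation.
    static FirebaseVisionImage fromCameraFrame(Image frame) {
        return FirebaseVisionImage.fromMediaImage(frame, FirebaseVisionImageMetadata.ROTATION_0);
    }
}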

Step 3: Run The Model

As with the previous steps, this step is incredibly similar to what we did to run the local model.

public class CloudClassifier {
    //...
    public void execute(Bitmap bitmap, OnSuccessListener successListener, OnFailureListener failureListener) {
        //...
        detector.detectInImage(image)
                .addOnSuccessListener(successListener)
                .addOnFailureListener(failureListener);
    }
}
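For reference, detectInImage() returns a Play services Task&lt;List&lt;FirebaseVisionCloudLabel&gt;&gt;, so instead of passing listeners in you could also handle both outcomes in one place. This is just an alternative sketch, not what the demo does:

detector.detectInImage(image)
        .addOnCompleteListener(task -> {
            if (task.isSuccessful()) {
                // Same list the success listener would have received.
                List<FirebaseVisionCloudLabel> labels = task.getResult();
                // ... hand the labels over for processing
            } else {
                Log.e("CloudClassifier", "Cloud label detection failed", task.getException());
            }
        });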

Step 4: Process The Output

Since the local model and the cloud-based model are different, their outputs differ as well, so the object type we get in the OnSuccessListener response is different for each detector. Still, the two objects are quite similar to work with.

On ImageClassifier.java:

cloudClassifier = new CloudClassifier();

public void executeCloud(Bitmap bitmap, ClassifierCallback callback) {
    successListener = new OnSuccessListener<List<FirebaseVisionCloudLabel>>() {
        @Override
        public void onSuccess(List<FirebaseVisionCloudLabel> labels) {
            processCloudResult(labels, callback);
        }
    };
    cloudClassifier.execute(bitmap, successListener, failureListener);
}
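The failureListener passed here isn’t shown in the post’s snippets. A minimal sketch, assuming it is defined inside executeCloud next to successListener (the log tag and the empty-result fallback are my own choices):

failureListener = new OnFailureListener() {
    @Override
    public void onFailure(@NonNull Exception e) {
        // Cloud calls can fail for network, quota or auth reasons; don't leave the UI hanging.
        Log.e("ImageClassifier", "Cloud label detection failed", e);
        callback.onClassified("Cloud Model", Collections.emptyList());
    }
};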

Once again, how you process the results depends on what you decide the UI should present. For this example:

private void processCloudResult(List<FirebaseVisionCloudLabel> labels, ClassifierCallback callback) {
    labels.sort(cloudLabelComparator);
    resultLabels.clear();
    FirebaseVisionCloudLabel label;
    for (int i = 0; i < Math.min(RESULTS_TO_SHOW, labels.size()); ++i) {
        label = labels.get(i);
        resultLabels.add(label.getLabel() + ":" + label.getConfidence());
    }
    callback.onClassified("Cloud Model", resultLabels);
}
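The cloudLabelComparator used above isn’t shown either; it just needs to order the labels by confidence, highest first. A possible sketch (the field name comes from the snippet above, the implementation is my assumption):

private final Comparator<FirebaseVisionCloudLabel> cloudLabelComparator =
        (l1, l2) -> Float.compare(l2.getConfidence(), l1.getConfidence());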

As mentioned, FirebaseVisionCloudLabel and FirebaseVisionLabel, which is used for the local model, are different objects. Both are based on Google’s Knowledge Graph, and their APIs are the same:

  • getLabel() — a human-understandable text that represents an object found in the image. It will always be in English.
  • getConfidence() — a float between 0 and 1, representing the probability that the object detected in the image indeed fits the suggested label.
  • getEntityId() — if the label is found in the Google Knowledge Graph, this field will return a unique id for it, which can be queried further via the Knowledge Graph API to obtain wider context about the object (see the sketch below).
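Putting those three getters together, you could format a label along these lines (purely illustrative; the demo only shows the label and confidence):

static String describe(FirebaseVisionCloudLabel label) {
    // e.g. "Banana (93.0% confident), Knowledge Graph entity id: /m/..."
    String entityId = label.getEntityId() == null ? "n/a" : label.getEntityId();
    return String.format(Locale.US, "%s (%.1f%% confident), Knowledge Graph entity id: %s",
            label.getLabel(), label.getConfidence() * 100, entityId);
}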

That’s pretty much it! 🎉 Let’s see some results:

[Screenshot: the demo app’s classification results from the cloud model]

That’s so cool!!

As expected, the model took a little longer to return results. However, now it can tell me which specific fruit is in the image, not just a general title. Also, it is more than 90% confident of the result, compared to 70–80% confidence with the local model.

This tradeoff is for us, as the app developers, to consider.

The code for this post can be found in the repo, on branch 2.run_cloud_model.

Hope you, too, realise how simple and fun it is to use Firebase ML Kit. The other models (face detection, barcode scanning, etc.) work much the same way, and I encourage you to try them out!


Can we get even better results? Let’s explore that in the following post, using a custom model. See you!!

