Who’s afraid of Machine Learning? Part 6: Running ML-Kit on Cloud

Intro to ML & ML-Kit for mobile developers

Britt Barak
Oct 5, 2018 · 5 min read

Last post, we ran the local (on-device) model to classify an image. Now, it’s time to try to increase the labels’ accuracy (while allowing more latency) by running a cloud-based model. ☁

Photo by Jéan Béller on Unsplash

Before getting started:

If you haven’t already, make sure to clone this demo’s code and add Firebase and ML Kit to your app. For guidance, check out the Before getting started section of the last post.

As you recall, for each model we have 4 steps to implement:

0. Setting up (it’s not cheating :) doesn’t really count as a step…)

  1. Setup the Classifier
  2. Process the input
  3. Run the model
  4. Process the output

Let’s get started:

Running a cloud based model

Step 0: Setting up

  1. Using a cloud-based model requires payment above a certain quota. For demo and development purposes, you’re unlikely to get near that quota. However, you must upgrade your Firebase project plan so that, in theory, it can be charged if needed. Upgrade your project from the Spark plan, which is free, to the Blaze plan, which is pay-as-you-go and enables you to use the Cloud Vision APIs. You can do so in the Firebase console.
  2. Enable the Cloud Vision API in the Cloud Console API library. In the top menu, select your Firebase project, and if the API isn’t already enabled, click Enable.
  • Note: for development, this configuration will do. However, before deploying to production, you should take some extra steps to ensure that no unauthorised calls are made with your account. For that case, check out the instructions here.

Step 1: Setup The Classifier

public class CloudClassifier {
    FirebaseVisionCloudLabelDetector detector =
            FirebaseVision.getInstance().getVisionCloudLabelDetector();
}

It’s really almost the same as last post’s LocalClassifier, except for the type of the detector.

There are a few extra options we can set on the detector:

  • setMaxResults() — by default, 10 results are returned. If you need more than that, you have to specify it here. On the other end, when designing the demo app I decided to present only the top 3 results, so I can define that here and make the computation a little faster.
  • setModelType() — can be either STABLE_MODEL or LATEST_MODEL; the latter is the default.
public class CloudClassifier {
    FirebaseVisionCloudDetectorOptions options =
            new FirebaseVisionCloudDetectorOptions.Builder()
                    .setModelType(FirebaseVisionCloudDetectorOptions.LATEST_MODEL)
                    .setMaxResults(ImageClassifier.RESULTS_TO_SHOW)
                    .build();

    FirebaseVisionCloudLabelDetector detector =
            FirebaseVision.getInstance().getVisionCloudLabelDetector(options);
}

Step 2: Process The Input

Some more explanation about FirebaseVisionImage can be found in the previous post.

public class CloudClassifier {
    //...
    FirebaseVisionImage image;

    public void execute(Bitmap bitmap) {
        image = FirebaseVisionImage.fromBitmap(bitmap);
    }
}
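By the way, a Bitmap isn’t the only input option. If your image already lives in a file or arrives as a camera frame, FirebaseVisionImage has factory methods for those too, which can save you a conversion. A quick sketch (context, uri and mediaImage are hypothetical variables from your app):

// From a file on the device; throws IOException if the Uri can't be read:
FirebaseVisionImage fromFile = FirebaseVisionImage.fromFilePath(context, uri);

// From an android.media.Image camera frame, passing its rotation:
FirebaseVisionImage fromCamera = FirebaseVisionImage.fromMediaImage(
        mediaImage, FirebaseVisionImageMetadata.ROTATION_90);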

Step 3: Run The Model

public class CloudClassifier {
    public void execute(Bitmap bitmap,
                        OnSuccessListener<List<FirebaseVisionCloudLabel>> successListener,
                        OnFailureListener failureListener) {
        //...
        detector.detectInImage(image)
                .addOnSuccessListener(successListener)
                .addOnFailureListener(failureListener);
    }
}

Step 4: Process The Output

On ImageClassifier.java:

cloudClassifier = new CloudClassifier();

public void executeCloud(Bitmap bitmap, ClassifierCallback callback) {
    successListener = new OnSuccessListener<List<FirebaseVisionCloudLabel>>() {
        @Override
        public void onSuccess(List<FirebaseVisionCloudLabel> labels) {
            processCloudResult(labels, callback);
        }
    };
    cloudClassifier.execute(bitmap, successListener, failureListener);
}
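Note that failureListener is used above, but its definition isn’t shown in this snippet. A minimal sketch of one, assuming we just want to log the error (the tag and message here are my own):

failureListener = new OnFailureListener() {
    @Override
    public void onFailure(@NonNull Exception e) {
        // Cloud calls can fail (no network, quota exceeded, etc.), so don't ignore this.
        Log.e("ImageClassifier", "Cloud label detection failed", e);
    }
};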

Once again, how you process the results depends on what you decided the UI should present. For this example:

private void processCloudResult(List<FirebaseVisionCloudLabel> labels,
                                ClassifierCallback callback) {
    labels.sort(cloudLabelComparator);
    resultLabels.clear();
    FirebaseVisionCloudLabel label;
    for (int i = 0; i < Math.min(RESULTS_TO_SHOW, labels.size()); ++i) {
        label = labels.get(i);
        resultLabels.add(label.getLabel() + ":" + label.getConfidence());
    }
    callback.onClassified("Cloud Model", resultLabels);
}
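The cloudLabelComparator isn’t shown here either. Assuming the intent is to sort by descending confidence, so the most likely labels come first, a sketch could be:

private final Comparator<FirebaseVisionCloudLabel> cloudLabelComparator =
        (label1, label2) -> Float.compare(label2.getConfidence(), label1.getConfidence());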

As mentioned, FirebaseVisionCloudLabel and FirebaseVisionLabel (which is used for the local model) are different objects. Both are based on Google Knowledge Graph, and so their APIs are the same:

  • getLabel() — human-readable text that represents an object found in the image. It will always be in English.
  • getConfidence() — a float between 0 and 1, representing the probability that the object detected in the image indeed fits the suggested label.
  • getEntityId() — if the label is found in Google Knowledge Graph, this field will return a unique ID for it, which can be further queried via the Knowledge Graph API to obtain wider context about the object.
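So, for instance, logging the results looks the same for either model. A minimal sketch (TAG is a hypothetical log tag):

for (FirebaseVisionCloudLabel label : labels) {
    Log.d(TAG, label.getLabel()
            + " (confidence: " + label.getConfidence() + ")"
            + ", entity id: " + label.getEntityId());
}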

That’s pretty much it! 🎉

That’s so cool!!

As expected, the model took a little longer to return results. However, now it can tell me which specific fruit is in the image, not just a general title. Also, it is more than 90% confident of the result, compared to 70–80% confidence with the local model.

This tradeoff is for us, the app developers, to consider.

The code for this post can be found in the repo, on branch 2.run_cloud_model.

Hope you, too, realise how simple and fun it is to use Firebase ML Kit. Using the other models (face detection, barcode scanning, etc.) works much the same way, and I encourage you to try them out! See the sketch below.
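For example, getting and running a face detector follows the same pattern as the label detector above. A sketch, assuming image is a FirebaseVisionImage as before (the log tag is my own):

FirebaseVisionFaceDetector faceDetector =
        FirebaseVision.getInstance().getVisionFaceDetector();

faceDetector.detectInImage(image)
        .addOnSuccessListener(faces -> {
            // Each FirebaseVisionFace exposes bounds, landmarks, etc.
        })
        .addOnFailureListener(e -> Log.e("ImageClassifier", "Face detection failed", e));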

Can we get even better results? Let’s explore that using a custom model as well, in the following post. See you!!
