Who’s afraid of Machine Learning? Part 6 : Running ML-Kit On Cloud

Intro to ML & ML-Kit for mobile developers

Britt Barak
Google Developer Experts
5 min read · Oct 5, 2018

--

In the last post we ran the local (on-device) model to classify an image. Now it’s time to try to increase the labels’ accuracy (while allowing more latency) by running a cloud-based model. ☁

Jéan Béller on Unsplash

Before getting started:

If you followed along with the last post, you’re all set and can skip to the next section!

Otherwise, make sure to clone this demo’s code and add Firebase and ML Kit to your app. For guidance, check out the Before getting started section of the last post.

As you recall, for each model we have 4 steps to implement:

0. Setting up (it’s not cheating :) doesn’t really count as a step…)

  1. Setup the Classifier
  2. Process the input
  3. Run the model
  4. Process the output

Let’s get started:

Running a cloud based model

Step 0: Setting up

Cloud-based models use the Cloud Vision API, which you have to make sure is enabled for your project:

  1. Using a cloud-based model requires payment beyond a certain quota. For demo and development purposes, you’re unlikely to get anywhere near that quota; however, you must upgrade your Firebase project plan so that, in theory, it could be charged if needed. Upgrade your project from the Spark plan, which is free, to the Blaze plan, which is pay as you go and enables the Cloud Vision API. You can do so in the Firebase console.
  2. Enable the Cloud Vision API in the Cloud Console API library. In the top menu, select your Firebase project and, if the API isn’t already enabled, click Enable.
  • Note: for development, this configuration will do. However, before deploying to production, you should take some extra steps to ensure that no unauthorised calls are made on your account’s behalf. For that, check out the instructions here.

Step 1: Setup The Classifier

Create a CloudClassifier class that holds the detector object:

It’s really almost the same as the last post’s LocalClassifier, except for the type of the detector.
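
Here’s a minimal sketch of what that class might look like. The field and method names are mine, not necessarily the demo’s; the ML Kit calls come from the firebase-ml-vision API as it was at the time of writing:

```java
import com.google.android.gms.tasks.Task;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.cloud.label.FirebaseVisionCloudLabel;
import com.google.firebase.ml.vision.cloud.label.FirebaseVisionCloudLabelDetector;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;

import java.util.List;

public class CloudClassifier {

    // Cloud-based detector, instead of the FirebaseVisionLabelDetector used by LocalClassifier.
    private final FirebaseVisionCloudLabelDetector detector =
            FirebaseVision.getInstance().getVisionCloudLabelDetector();

    // Sends the image to the Cloud Vision API and returns the async task with its labels.
    public Task<List<FirebaseVisionCloudLabel>> execute(FirebaseVisionImage image) {
        return detector.detectInImage(image);
    }
}
```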

There are a few extra options we can set on the detector:

  • setMaxResults() — by default, up to 10 results are returned; if you need more than that, you have to specify it here. On the other hand, when designing the demo app I decided to present only the top 3 results, so I can set that here (as in the sketch after this list) and make the computation a little faster.
  • setModelType() — can be either STABLE_MODEL or LATEST_MODEL; the latter is the default.
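
A sketch of how those options could be plugged in, using the FirebaseVisionCloudDetectorOptions builder; the top-3 / stable-model choice shown here is just an example:

```java
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.cloud.FirebaseVisionCloudDetectorOptions;
import com.google.firebase.ml.vision.cloud.label.FirebaseVisionCloudLabelDetector;

// Inside CloudClassifier: build the options once and hand them to the detector.
private final FirebaseVisionCloudDetectorOptions options =
        new FirebaseVisionCloudDetectorOptions.Builder()
                .setMaxResults(3)                                              // only the top 3 labels
                .setModelType(FirebaseVisionCloudDetectorOptions.STABLE_MODEL) // or LATEST_MODEL
                .build();

private final FirebaseVisionCloudLabelDetector detector =
        FirebaseVision.getInstance().getVisionCloudLabelDetector(options);
```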

Step 2: Process The Input

Similarly to the local detector, FirebaseVisionCloudLabelDetector takes a FirebaseVisionImage as its input, which we will obtain from a Bitmap, since that is what the UI already works with.
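
For example, assuming the UI already holds the picked image in a bitmap variable (a hypothetical name), the conversion is a one-liner:

```java
import android.graphics.Bitmap;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;

// Wrap the Bitmap the UI already displays into the input type the detector expects.
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
```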

Some more explanations about FirebaseVisionImage can be found in the previous post.

Step 3: Run The Model

As with the previous steps, this one is remarkably similar to what we did to run the local model.
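
A sketch of that call, assuming the detector from step 1 and the image from step 2; detectInImage() returns a Task, so the result arrives asynchronously on listeners:

```java
import com.google.android.gms.tasks.OnFailureListener;
import com.google.android.gms.tasks.OnSuccessListener;
import com.google.firebase.ml.vision.cloud.label.FirebaseVisionCloudLabel;

import java.util.List;

// Run the cloud model; results (or errors) are reported back on the listeners.
detector.detectInImage(image)
        .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionCloudLabel>>() {
            @Override
            public void onSuccess(List<FirebaseVisionCloudLabel> labels) {
                // Step 4: process the labels for the UI.
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                // Handle the error, e.g. log it or fall back to the local model.
            }
        });
```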

Step 4: Process The Output

Since the local model and the cloud-based model are different, their outputs differ as well: the object type we receive in OnSuccessListener is different for each detector. Still, the two objects are quite similar to work with.

In ImageClassifier.java:

Once again, how you process the results depends on what you decide the UI should present. For this example:
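
A sketch of that processing, assuming the UI simply shows one short line of text per label; the helper name and formatting are mine, not the demo’s:

```java
import com.google.firebase.ml.vision.cloud.label.FirebaseVisionCloudLabel;

import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Turns the detector's output into short, human-readable lines for the UI.
// With setMaxResults(3), the list is already capped at the top 3 results.
static List<String> toDisplayText(List<FirebaseVisionCloudLabel> labels) {
    List<String> lines = new ArrayList<>();
    for (FirebaseVisionCloudLabel label : labels) {
        // e.g. "Banana (92%)" (illustrative values only)
        lines.add(String.format(Locale.US, "%s (%.0f%%)",
                label.getLabel(), label.getConfidence() * 100));
    }
    return lines;
}
```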

As mentioned, FirebaseVisionCloudLabel and FirebaseVisionLabel, which the local model uses, are different objects. Both are based on the Google Knowledge Graph, though, so their APIs are the same (the snippet after this list shows them side by side):

  • getLabel() — human-readable text that describes an object found in the image. It will always be in English.
  • getConfidence() — a float between 0 and 1, representing the probability that the object detected in the image indeed fits the suggested label.
  • getEntityId() — if the label exists in the Google Knowledge Graph, this field will return a unique ID for it, which can then be queried through the Knowledge Graph API to obtain wider context about the object.
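
Purely as an illustration, here is how all three fields could be read off the labels received in OnSuccessListener (the values hinted at in the comments are examples, not actual results):

```java
import com.google.firebase.ml.vision.cloud.label.FirebaseVisionCloudLabel;

import java.util.List;

// Reading the three fields off each label returned by the cloud detector.
for (FirebaseVisionCloudLabel label : labels) {
    String text = label.getLabel();           // human-readable label, always in English
    float confidence = label.getConfidence(); // 0-1, how well the label fits the image
    String entityId = label.getEntityId();    // Knowledge Graph entity id, when available
}
```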

That’s pretty much it! 🎉

That’s it! Let’s see some results:

That’s so cool!!

As expected, the model took a little longer to return results. However, now it can tell me which specific fruit is in the image, not just a general title. Also, it is more than 90% confident of the result, compared to 70–80% confidence with the local model.

This tradeoff is for us, as the app developers, to consider.

The code for this post can be found in the repo, on branch 2.run_cloud_model.

Hope you, too, realise how simple and fun it is to use Firebase ML Kit. Using the other models (face detection, barcode scanning, etc.) works much the same way, and I encourage you to try them out!

Can we get even better results? Let’s explore that with a custom model in the following post. See you!!
