Discovery of Firebase ML Kit image labeling

Since its announcement at Google I/O back in May, I’ve wanted to try the new Firebase ML Kit tools.

I had the opportunity to try it during my summer holidays.

This is my first try with the label detection provided by ML Kit. One of the coolest things about it is that you can use it both online and offline.

The app I made basically takes an image URL and labels the image, either on-device or in the cloud. Here’s how I created it.

Create your project in Firebase & activate Google Cloud Vision API.

Go to the Firebase console, create a new project, and follow the steps to add it to your Android app.

Firebase project dashboard

Change the pricing plan to Blaze; this is required in order to use cloud recognition (you get 1,000 free uses per month).

Firebase pricings

Activate Cloud Vision API for your project by following this link.

Cloud Vision API

Add ML Kit vision API to your project

In the app-level build.gradle in Android Studio, add the ML Kit vision dependencies (check the Firebase documentation for the latest version numbers):

implementation 'com.google.firebase:firebase-core:16.0.1'
implementation 'com.google.firebase:firebase-ml-vision:16.0.0'
implementation 'com.google.firebase:firebase-ml-vision-image-label-model:15.0.0'

Create the layout

Next, we create a layout with:

  • A TextInputLayout into which we’ll paste an image URL
  • An ImageView, used to show the image
  • A Button to load the image from the given URL
  • A Button for on-device labeling
  • A Button for cloud labeling
The layout
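To give an idea of how these views fit together, here is a sketch of the activity wiring. The view IDs (urlInput, imageView, loadButton, localButton, cloudButton) are my own assumptions, not taken from the original layout, and the image download uses a plain Thread for brevity:

```kotlin
// Sketch of the activity wiring; view IDs and names are assumptions.
// Uses kotlinx synthetic view references, common at the time of writing.
class MainActivity : AppCompatActivity() {

    private var bitmap: Bitmap? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        loadButton.setOnClickListener {
            val url = urlInput.editText?.text.toString()
            Thread {
                // Download the image off the main thread, then display it
                val bmp = BitmapFactory.decodeStream(URL(url).openStream())
                runOnUiThread {
                    bitmap = bmp
                    imageView.setImageBitmap(bmp)
                }
            }.start()
        }

        localButton.setOnClickListener { bitmap?.let { runImageLabelingOnDevice(it) } }
        cloudButton.setOnClickListener { bitmap?.let { runImageLabelingCloud(it) } }
    }
}
```

In a production app you would also need the INTERNET permission in the manifest and proper error handling for bad URLs.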

Add the method for local labeling

fun runImageLabelingOnDevice(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val labelDetector = FirebaseVision.getInstance().visionLabelDetector
    labelDetector.detectInImage(image)
        .addOnSuccessListener { labels ->
            // Show each label and its confidence score
            labels.forEach { Log.d("MLKit", "${it.label}: ${it.confidence}") }
        }
}

First, we create a FirebaseVisionImage object from the bitmap we want to label.

Then, we create a FirebaseVisionLabelDetector and use its detectInImage() method to find labels in the image. The call is asynchronous: the detected labels are delivered to the success listener.
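Since detectInImage() returns a Task, it is also worth handling failures, for example when the on-device model has not been downloaded yet. A minimal sketch (the TAG constant is a placeholder of mine):

```kotlin
labelDetector.detectInImage(image)
    .addOnSuccessListener { labels ->
        // Each FirebaseVisionLabel exposes the label text and a confidence in [0, 1]
        val text = labels.joinToString("\n") { "${it.label}: ${"%.2f".format(it.confidence)}" }
        Log.d(TAG, text)
    }
    .addOnFailureListener { e ->
        // Called when detection fails (e.g. model not yet downloaded)
        Log.e(TAG, "Labeling failed", e)
    }
```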

Add the method for cloud labeling

fun runImageLabelingCloud(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val options = FirebaseVisionCloudDetectorOptions.Builder()
        .setModelType(FirebaseVisionCloudDetectorOptions.LATEST_MODEL)
        .setMaxResults(15)
        .build()
    val labelDetector = FirebaseVision.getInstance().getVisionCloudLabelDetector(options)
    labelDetector.detectInImage(image)
        .addOnSuccessListener { labels ->
            labels.forEach { Log.d("MLKit", "${it.label}: ${it.confidence}") }
        }
}

The cloud method is nearly identical, except for two things:

  • We need a FirebaseVisionCloudLabelDetector instead of a FirebaseVisionLabelDetector
  • We need to pass that detector some options: how many results we want and which model we want to use (in this case, FirebaseVisionCloudDetectorOptions.LATEST_MODEL)
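For reference, the options builder accepts two model types, STABLE_MODEL and LATEST_MODEL. A quick sketch of configuring the stable one instead:

```kotlin
// STABLE_MODEL trades freshness for predictability;
// LATEST_MODEL uses the newest cloud model available.
val stableOptions = FirebaseVisionCloudDetectorOptions.Builder()
    .setModelType(FirebaseVisionCloudDetectorOptions.STABLE_MODEL)
    .setMaxResults(15)
    .build()
```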

Test the app!

The app running!

That’s it for my first try with ML Kit (and my first story on Medium). I hope you liked this article; comments and suggestions are welcome.