6 Steps to add Computer Vision Super Powers to your Kotlin Android App
A week ago I read Sara Robinson's article about adding Computer Vision to an iOS app.
Adding Computer Vision to your iOS App
It is a great article that plays with a cool idea: building a serverless application that combines Firebase services (Cloud Storage, Cloud Functions, and Cloud Firestore) with a cloud service like the Google Vision API, offering Machine Learning-powered image recognition.
I started thinking about a cool application using these services, but I'm an Android developer. So I decided to bring the same functionality to an Android app written in Kotlin.
This app lets you upload a picture to Firebase Cloud Storage. The upload triggers a Cloud Function that sends the picture to the Vision API and retrieves the information we need. That information is then stored in a Cloud Firestore database. Our Android application listens for changes to the Firestore database and updates a View accordingly.
Let’s see how to develop this serverless Computer Vision app in 6 steps:
1. Create the Firebase project
Using the Firebase command line tool, we can initialize a Firebase project in a dedicated directory.
This will be our door into the world of Firebase power.
Then remember to choose the services you want. For our demo application we need Firestore, Functions, and Storage.
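As a minimal sketch, assuming the Firebase CLI is installed through npm, the initialization could look like this (`vision-demo` is just a placeholder directory name):

```shell
# Install the Firebase CLI once and authenticate
npm install -g firebase-tools
firebase login

# Create a dedicated directory and initialize the project,
# enabling only the features we need
mkdir vision-demo && cd vision-demo
firebase init firestore functions storage
```

The `firebase init` command walks you through linking the directory to a Firebase project and scaffolds a `functions/` folder for the Cloud Function we will write in step 4.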
2. Create the Android project
From Android Studio, create the project and link it to our Firebase project. Thanks to the Firebase integration in Android Studio, this is super easy.
3. Add the Firebase Cloud Storage Super Power
Using the Assistant tab, we can link our Android project to our Firebase project. The Assistant also explains how to use Cloud Storage, the first service we need.
In your Kotlin Activity class, get a reference to Firebase Storage.
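A minimal sketch of that reference, assuming the `firebase-storage` dependency has been added through the Assistant (`MainActivity` is just an example name):

```kotlin
import androidx.appcompat.app.AppCompatActivity
import com.google.firebase.storage.FirebaseStorage
import com.google.firebase.storage.StorageReference

class MainActivity : AppCompatActivity() {

    // Reference to the root of the default Cloud Storage bucket
    private val storageRef: StorageReference by lazy {
        FirebaseStorage.getInstance().reference
    }
}
```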
Then you have to write the code to pick an image from the Android filesystem.
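One common way to do this is with an `ACTION_GET_CONTENT` intent. This sketch assumes the methods live inside the Activity; `PICK_IMAGE_REQUEST` is just an arbitrary request code:

```kotlin
private val PICK_IMAGE_REQUEST = 71

// The URI of the picked image, kept so we can upload it later
private var imageUri: Uri? = null

// Let the user pick an image from the device
private fun chooseImage() {
    val intent = Intent(Intent.ACTION_GET_CONTENT).apply { type = "image/*" }
    startActivityForResult(
        Intent.createChooser(intent, "Select a picture"),
        PICK_IMAGE_REQUEST
    )
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == PICK_IMAGE_REQUEST && resultCode == Activity.RESULT_OK) {
        imageUri = data?.data
    }
}
```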
And then you can store the image on Cloud Storage with some simple code.
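A sketch of the upload, assuming a `storageRef` pointing at the bucket root and an `imageUri` picked by the user; the `images/` folder and the random file name are just choices for this demo:

```kotlin
// Upload the picked image; the Cloud Function will take it from here
private fun uploadImage() {
    val uri = imageUri ?: return
    val imageRef = storageRef.child("images/${UUID.randomUUID()}")
    imageRef.putFile(uri)
        .addOnSuccessListener { Log.d("Upload", "Image uploaded") }
        .addOnFailureListener { e -> Log.e("Upload", "Upload failed", e) }
}
```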
4. Create the Cloud Function
Now we need to write some server-side logic using Firebase Cloud Functions. Let's say that any time an image is uploaded to Cloud Storage, we want to send it to the Vision API.
Here are the instructions to set up Cloud Functions in your project. After the configuration, we can use this code for index.js.
In this code we listen for changes in Storage with functions.storage.object().onChange.
When a new image is uploaded, we take its URI (gcsPath) and use it to prepare the request we want to send to the Vision API.
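A sketch of that trigger, using the `firebase-functions` API of the time (newer SDK versions have replaced `onChange` with `onFinalize`); the function name `callVision` is just an example:

```javascript
// index.js
const functions = require('firebase-functions');

exports.callVision = functions.storage.object().onChange(event => {
  const object = event.data;

  // Build the gs:// URI (gcsPath) of the uploaded file
  const gcsPath = `gs://${object.bucket}/${object.name}`;
  console.log(`New file uploaded: ${gcsPath}`);

  // ...prepare the Vision API request with gcsPath (see step 5)
  return Promise.resolve();
});
```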
5. Use the Vision API
The Vision API can extract a lot of information about the uploaded image using Machine Learning. It's super easy to use, and so is parsing its JSON response.
Let's say we want to extract only the features coming from Web Detection and Safe Search Detection, so we need to pass these as features in our request.
Contacting the Vision API is as easy as calling the annotateImage method with our request.
When the JSON response arrives, we store it in a collection (images) in the Cloud Firestore DB, so we can analyze it and keep it for further tasks.
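A sketch of that request inside the Cloud Function, assuming the `@google-cloud/vision` and `firebase-admin` packages are installed; the helper name `annotateAndSave` and the way the document id is chosen are just assumptions for this demo:

```javascript
const vision = require('@google-cloud/vision');
const admin = require('firebase-admin');
admin.initializeApp();

const client = new vision.ImageAnnotatorClient();

// Annotate the image at gcsPath and store the response in Firestore
function annotateAndSave(gcsPath, docId) {
  const request = {
    image: { source: { imageUri: gcsPath } },
    // Only the two feature types we care about
    features: [
      { type: 'WEB_DETECTION' },
      { type: 'SAFE_SEARCH_DETECTION' }
    ]
  };

  return client.annotateImage(request).then(results => {
    const response = results[0];
    // Save the parts of the JSON response we need in the "images" collection
    return admin.firestore().collection('images').doc(docId).set({
      webDetection: response.webDetection,
      safeSearch: response.safeSearchAnnotation
    });
  });
}
```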
6. Observe Cloud Firestore events
Coming back to our Kotlin Android app: after uploading the image to Storage, we listen for events on the images collection in the Firestore DB.
Each time the collection is updated (with the JSON coming from the Vision API), our app reacts to the event.
We extract the information we need from the JSON response, taking it from the webEntities description fields.
Then we can show it in a TextView, or in any more expressive way you prefer.
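A sketch of that listener, assuming the `firebase-firestore` dependency is added and that `resultTextView` is a TextView from a hypothetical layout:

```kotlin
// React to every change on the "images" collection and show the
// descriptions of the detected web entities
private fun listenForAnnotations() {
    FirebaseFirestore.getInstance()
        .collection("images")
        .addSnapshotListener { snapshot, error ->
            if (error != null || snapshot == null) return@addSnapshotListener
            val descriptions = snapshot.documents.flatMap { doc ->
                // webDetection.webEntities is a list of maps, each carrying
                // a human-readable "description" field
                val webDetection = doc.get("webDetection") as? Map<*, *>
                val entities = webDetection?.get("webEntities") as? List<*> ?: emptyList<Any>()
                entities.mapNotNull { (it as? Map<*, *>)?.get("description") as? String }
            }
            resultTextView.text = descriptions.joinToString("\n")
        }
}
```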
Try to add yours and share the result! :)
You can find the complete source code for this project, ARCalories, on GitHub.
And… are you curious about the name of the project? Stay tuned, because it will also involve ARCore…
Applaud and share for more!