Using AutoML Vision in your next Flutter App
In this exercise we will build a Flutter app that solves an image classification problem using AutoML.
AutoML is a tool on the Google Cloud Platform that allows anyone without machine learning expertise to build machine learning models without having to write any code.
For this exercise we will create a model for AWS Lambda icon identification. The model will be trained with a set of images, and the end result will predict whether a given image is a Lambda icon or not.
The complete bundle for AWS Architecture Icons can be downloaded from https://aws.amazon.com/architecture/icons/
The Flutter app will have a single button that captures an image from the camera and sends it to the server for prediction.
Prerequisites: an active Google Cloud account and a fair knowledge of Flutter.
Our goals
- Setup AutoML and the prediction model for the AWS Lambda image.
- Testing the AutoML prediction model over REST API.
- Wiring the REST API with our Flutter App.
Training the Model
Setup AutoML and the prediction model for the AWS Lambda image
We will use AutoML Vision, which allows us to create our own custom machine learning models. AutoML Vision has an easy-to-use graphical interface to upload images and train custom image models, and we get a REST API interface to generate predictions.
Log in to your Google Cloud Platform (GCP) console and open the Vision dashboard. After selecting the Google Cloud project to be used for Vision, we need to:
1. Enable billing
2. Enable the required APIs and grant permissions
Follow the prompts to complete the setup. Once done, we will see the AutoML Vision dashboard.
Click on Create a New Dataset and enter the dataset name. We can import the images later; make sure the multi-label classification option is selected.
Once the dataset is created we land on the dataset dashboard, which has four tabs: Images, Train, Evaluate and Predict.
In the Images tab, we upload the training images. A minimum of 10 images per label is required, otherwise we will not be able to train our model. The recommendation is to have about 100 images per label for more accurate results. We will create two labels:
- Lambda
- NotLambda
For the Lambda label we upload all the Lambda-related images, and random icon samples for the NotLambda label.
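As an alternative to uploading through the UI, AutoML Vision can also import images listed in a CSV file stored in a Cloud Storage bucket, one `gs://` URI and label per row. A minimal sketch, where the bucket name and file names are hypothetical:

```shell
# Build an import CSV for AutoML Vision; each row is "gs://path,label".
# The bucket "automl-icons" and the image file names are placeholders.
cat > labels.csv <<EOF
gs://automl-icons/lambda/lambda-icon-1.png,Lambda
gs://automl-icons/lambda/lambda-icon-2.png,Lambda
gs://automl-icons/other/s3-icon.png,NotLambda
EOF
```

Upload labels.csv to the same bucket and point the dataset's import step at it; this is handy when you have hundreds of training images.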
Now, it’s time to train the model. Simply click on the Train tab and then click on Start Training.
The training job may take a few minutes, after which we can test the model for accuracy. Once it is done, click on the Evaluate tab, which shows our results and gives more details about the labels themselves and how well the model can predict each specific label.
To perform predictions, click the Predict tab. It provides a simple web interface to test our model with images and displays the results. Upload a random icon image to test the model.
We are done with the first part of setting up AutoML; let's now see how to test the model over the REST API.
Test the AutoML model over REST API
Note: Since we are mainly interested in the Flutter app, we have skipped most of the security best practices in setting up the server. In production the setup must be properly secured.
Our server setup: an Ubuntu VM on Google Cloud, running Apache + PHP.
Create a VM instance (the smallest machine type will do: f1-micro, 1 vCPU, 0.6 GB memory) with HTTP and HTTPS traffic allowed and full access to all Cloud APIs.
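If you prefer the CLI to the console, the same VM can be created with gcloud. The sketch below writes the command to a helper script rather than running it (the instance name and zone are hypothetical; the `cloud-platform` scope corresponds to "full access to all Cloud APIs"):

```shell
# Sketch: create the proxy VM from the CLI. "automl-proxy" and the zone are placeholders.
cat > create-vm.sh <<'EOF'
#!/bin/sh
gcloud compute instances create automl-proxy \
  --machine-type=f1-micro \
  --zone=us-central1-a \
  --tags=http-server,https-server \
  --scopes=https://www.googleapis.com/auth/cloud-platform
EOF
chmod +x create-vm.sh
```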
Setting up a User/Service Account with Vision API access
From the IAM dashboard create a new user; let's name the account automlflutter and give it full access to AutoML Vision.
Next, let's create a service account that has access to the AutoML Vision API. From the IAM &amp; Admin dashboard, select the Service Accounts menu for your project and click on Create Service Account.
Let's give the service account full permissions.
Next, select the Create Key option to generate a JSON key and save the downloaded JSON file; it is required to authenticate the service account.
Now let's test the cURL example provided in the Predict tab.
From the Google Cloud console, SSH into the instance.
We need the JSON key on the instance. To copy the contents of the key file to the server, run vi key.json from the terminal, paste in the contents of the JSON file from your system, then save and exit vi.
With the JSON key file in place, let's activate our service account for use by the Google Cloud SDK. From the terminal run:
gcloud auth activate-service-account --key-file key.json
gcloud auth list  # to check that the account is set as your active account
Next we need to build our sample request.json file that will be posted to the AutoML API.
The imageBytes field holds a Base64-encoded image string; use https://www.base64encode.net/ to convert the image to Base64, then create the request.json file with the image bytes.
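Instead of an online encoder, request.json can also be built entirely on the VM with the base64 tool. A sketch, where the icon file name is a placeholder and the payload shape follows the v1beta1 predict API:

```shell
# Build request.json from an image file. "lambda-icon.png" is a placeholder;
# the tiny file written here only makes the snippet self-contained end to end.
printf 'not-a-real-image' > lambda-icon.png
BASE64_IMG=$(base64 -w0 lambda-icon.png)   # -w0 disables line wrapping (GNU coreutils)
cat > request.json <<EOF
{"payload": {"image": {"imageBytes": "${BASE64_IMG}"}}}
EOF
```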
Now execute the cURL example from your VM instance.
curl -X POST -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
https://automl.googleapis.com/v1beta1/projects/quizappflutter/locations/us-central1/models/ICNXXXXXXXXXXXXXXXXXXXXXXXX:predict -d @request.json
If you did not miss any step and the request.json file is correct, you will see the response from the REST API as shown below.
Congrats! We have successfully completed the initial setup for the REST API. We don't want the REST API to be called directly from our Flutter app, as this would expose the auth tokens.
We will have a PHP script that acts as a simple proxy to handle the request and response between the Flutter app and the AutoML REST service.
From the VM terminal run the commands below to install Apache + PHP.
sudo apt update
sudo apt install apache2
sudo apt install php libapache2-mod-php
sudo systemctl restart apache2
curl ifconfig.me  # to get the VM's public IP
Let's do some PHP
cd /var/www/html/ and create an automl.php file.
Next, let's write our PHP code to accept the Base64 image; the response will be JSON to be consumed in Flutter.
In the PHP script we use shell_exec to get the access token, and for the www-data user to be able to run gcloud we need to grant it the required execute permissions. NOTE: THIS SHOULD NOT BE DONE IN PRODUCTION; you should use the Google Cloud PHP client library to talk to the Vision API instead.
https://github.com/googleapis/google-cloud-php
To give the www-data user exec permission, run sudo visudo and add the following line to the end of the file, then save and exit.
#includedir /etc/sudoers.d
www-data ALL=(ALL) NOPASSWD: ALL
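The PHP proxy essentially shells out to the same two steps we ran by hand: fetch a token with gcloud, then POST request.json with curl. As a sketch of what it wraps, written to a helper script rather than executed here (the project and model IDs are the placeholders from the earlier cURL example):

```shell
# What automl.php runs under the hood (sketch only; not production-safe).
cat > predict.sh <<'EOF'
#!/bin/sh
TOKEN=$(gcloud auth application-default print-access-token)
curl -s -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://automl.googleapis.com/v1beta1/projects/quizappflutter/locations/us-central1/models/ICNXXXXXXXXXXXXXXXXXXXXXXXX:predict" \
  -d @request.json
EOF
chmod +x predict.sh
```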
We are done with the PHP code. The last part of our exercise is to wire up the API with the Flutter app.
Wiring the REST API with our Flutter App.
Create the Flutter app using flutter create awsquizsolver
Add these dependencies to pubspec.yaml:
image_picker: ^0.6.0+2
http: ^0.12.0+2
The image_picker plugin lets us take a new image with the device camera. To invoke the camera, just include the package and call the ImagePicker.pickImage function:
var image = await ImagePicker.pickImage(source: ImageSource.camera);
To convert the image to Base64 and post it to the server, the code snippet below will do that for us; update your VM's public IP in the HTTP POST URL.
List<int> imageBytes = image.readAsBytesSync();
String base64Image = base64.encode(imageBytes);
Map<String, String> headers = {"Accept": "application/json"};
Map body = {"image": base64Image};
var response = await http.post('http://YOUR_VM_PUBLIC_IP/automl.php',
    body: body, headers: headers);
print(response.body);
The complete code for the Flutter app can be found in the Git repo.
The finished Flutter app looks like this:
Do post back if you have any queries.
Get in Touch!