Getting started with Computer Vision AI / ML — Tutorial Step 6 of 7: Deploy the model to Gravio Coordinator and subsequently to HubKit

In this tutorial, we learn how to create a system to detect if a sink contains any dishes that need washing up. This is step 6 of 7. Start with Step 1 here, or view the full tutorial on Gravio.com.

Steps to get into Machine Learning and Computer Vision

  1. Collecting images of as many different situations as possible (Details)
  2. Setting up your Google Cloud Vision account (Details)
  3. Uploading and labeling those images via the Google Vision AI website, training the Model (Details)
  4. Download the model from Google’s platform (Details)
  5. Set up Gravio Coordinator and connect HubKit to it (Details)
  6. Deploy the model to Gravio Coordinator and subsequently to HubKit 👈 YOU ARE HERE
  7. Create Actions that are triggered based on what the camera sees (Details)

Step 6 — Deploy the model to Gravio Coordinator and subsequently to HubKit

Now that you have your models downloaded and your Gravio system set up, it’s time to deploy them on the Gravio infrastructure. Start with renaming the files:

  • “model.tflite” > “SinkModelTutorial.tflite” (or any other name you will remember)
  • “tflite_metadata.json” > “SinkModelTutorial.json” (or any other name you will remember)

The dict.txt file keeps its original name.
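If you prefer to rename the files in a script rather than by hand, a minimal Python sketch could look like this (assuming the files exported in Step 4 sit in a local folder called “model_export”; adjust the path to wherever you downloaded them):

```python
from pathlib import Path

export_dir = Path("model_export")   # hypothetical download folder from Step 4
new_name = "SinkModelTutorial"      # or anything you can remember

# Rename the model file and its metadata; dict.txt keeps its original name.
(export_dir / "model.tflite").rename(export_dir / f"{new_name}.tflite")
(export_dir / "tflite_metadata.json").rename(export_dir / f"{new_name}.json")
```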

On your Coordinator, open the “Inference Models” tab, click “Create” and then “TensorFlow Lite”. Note that you could also upload a previously exported ZIP file containing all the model data, but in this case we create the model from the individual files:

A popup will open where you can select the files we just renamed:
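If you are not sure which file is which, you can quickly inspect the metadata and the label list locally before selecting them. A small sketch, assuming the renamed files sit in your working directory:

```python
import json

# Print the model metadata (input size, output tensors, etc.).
with open("SinkModelTutorial.json") as f:
    print(json.dumps(json.load(f), indent=2))

# Print the labels the model was trained on.
with open("dict.txt") as f:
    print("Labels:", [line.strip() for line in f if line.strip()])
```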

Select “Count” and “JSON” to receive the detection result as a JSON value in Gravio. Also include “DetectionValues”, and set the confidence level at which the model should trigger.
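To pick a sensible confidence level, it can help to run the model locally on one of the test images from Step 1 and look at the raw scores it produces. The sketch below is only a rough local check, not Gravio’s own inference path; it assumes TensorFlow and Pillow are installed, a test image named “sink_test.jpg”, and that the output tensor layout matches your export (check the metadata JSON if the printed shapes look unexpected):

```python
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="SinkModelTutorial.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# Resize the test image to the model's expected input size.
height, width = inp["shape"][1], inp["shape"][2]
image = Image.open("sink_test.jpg").convert("RGB").resize((width, height))

frame = np.expand_dims(np.array(image), axis=0)
if inp["dtype"] == np.float32:
    frame = frame / 255.0   # float models usually expect normalized input
interpreter.set_tensor(inp["index"], frame.astype(inp["dtype"]))
interpreter.invoke()

# Print every output tensor; the scores tensor shows the confidence values
# you can use to judge where to set the trigger threshold in Gravio.
for detail in interpreter.get_output_details():
    print(detail["name"], interpreter.get_tensor(detail["index"]))
```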

Click “Create” and your new bespoke model will appear:

After restarting Gravio HubKit and logging out and back in again, this model can be deployed to the Gravio Hub. In Gravio Studio, go back to Settings and click “Inference Models”:

Once deployed, the model is available in the data layer and can be used like a sensor:

That’s it: you’re ready to create an action that is triggered when objects are detected. In the last step of this tutorial, you will learn how to create such actions.

Continue to the next step: Create Actions that are triggered based on what the camera sees

Join our Slack if you have questions.

Go to the full A-Z tutorial on Gravio.com
