Build Your Customized Vision App in 10 Minutes

customvision.ai

CustomVision is a platform from Microsoft where you can build your own customized image classifiers. If you want to develop a mobile app that needs to classify images, CustomVision offers one of the fastest ways to get there. You simply upload your labeled images, train on the platform, and then export a CoreML model for iOS or a TensorFlow model for Android (and even ONNX for Windows ML and a Dockerfile for AzureML). Yes, you heard that correctly. And it is free for 2 projects, with up to 5,000 training images per project.

Now let’s build our own customized model and use it on a mobile device.

First, we need categorized images for the training. For this tutorial, I will be using the Sign Language Digits Dataset, which was created by high school students. So let’s download this dataset. It has 10 classes, with the categorized images for each class in a separate folder.

https://github.com/ardamavi/Sign-Language-Digits-Dataset

Now go to the projects page and create your first project on CustomVision.


Click “New Project”, enter your project name, and choose Classification as the project type. For the Domains part, compact domains produce lightweight models, and only these can be exported as CoreML or TensorFlow. So choose General (compact) and create your project.

Now we will upload our images. Click the “Add images” button.

Browse your local files and choose the images under the 0 folder. I recommend uploading each folder’s images separately, because you specify the tag of the images after uploading. So choose the images in the 0 folder and upload them.

Add a tag for the images you just uploaded (e.g. “zero”) and finish the upload.

Choose “Add Images” and upload the images in folder 1.

Tag them as “one” and upload. Do the same for the rest of the folders.

After adding and tagging all of the folders, you are ready to train your model. Click the “Train” button.

Here you can change the “Probability Threshold”, which defines the minimum probability score for a prediction to be considered valid.
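
To make that concrete, here is a tiny Swift sketch of what the threshold means in practice. The Prediction type and function are hypothetical, just for illustration; they are not part of the CustomVision SDK:

```swift
// Hypothetical type for illustration: a prediction is a tag plus a
// confidence score between 0 and 1.
struct Prediction {
    let tag: String
    let confidence: Double
}

// Keep only predictions whose confidence clears the probability threshold,
// which is essentially what the portal does when evaluating your model.
func validPredictions(_ predictions: [Prediction], threshold: Double = 0.5) -> [Prediction] {
    return predictions.filter { $0.confidence >= threshold }
}
```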

Training finishes in a few minutes, and then we can check how good our model is. Precision tells you: when your model predicts a tag, how likely is it to be right? Recall tells you: out of the tags that should have been predicted, what percentage did your model actually find? According to these metrics, our model is doing pretty well.
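
If you prefer to see the two metrics as code, they reduce to simple ratios over true positives, false positives, and false negatives. This Swift sketch (with made-up numbers) is illustrative only:

```swift
// Precision = TP / (TP + FP): of the tags the model predicted, how many were right.
// Recall    = TP / (TP + FN): of the tags that should have been found, how many were.
func precisionAndRecall(truePositives tp: Int, falsePositives fp: Int, falseNegatives fn: Int)
    -> (precision: Double, recall: Double) {
    let precision = (tp + fp) == 0 ? 0 : Double(tp) / Double(tp + fp)
    let recall = (tp + fn) == 0 ? 0 : Double(tp) / Double(tp + fn)
    return (precision, recall)
}

// Example: 90 correct predictions, 10 wrong ones, 5 misses.
let (p, r) = precisionAndRecall(truePositives: 90, falsePositives: 10, falseNegatives: 5)
// p = 0.90, r ≈ 0.947
```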

If you want to test the model yourself, just click the “Quick Test” button. There you can upload your validation images and check the predictions.

Click the “Export” button and choose your platform. I will be using CoreML for this tutorial.

After downloading your model, get the Azure sample project from GitHub; there is one for iOS and one for Android.

For iOS, I renamed the downloaded model to Sign.mlmodel. Just drag your model into the Azure sample project and delete Fruit.mlmodel.

Open the ViewController.swift file and search for “Fruit”. Replace it with “Sign” in order to use your customized model.
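
For context, here is roughly what that code does, as a minimal Vision + CoreML sketch. Sign is the class Xcode auto-generates from Sign.mlmodel once you drag it in; the actual sample code may differ in its details:

```swift
import Vision
import CoreGraphics

// Sign is the class Xcode auto-generates from Sign.mlmodel; this mirrors
// what the sample already does for the Fruit model.
func classify(_ image: CGImage) throws {
    let model = try VNCoreMLModel(for: Sign().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Take the highest-confidence classification result.
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("Predicted: \(top.identifier) with confidence \(top.confidence)")
    }
    try VNImageRequestHandler(cgImage: image).perform([request])
}
```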

For Android, drop your .pb file and labels.txt file into your Android project’s Assets folder.

If you get code signing errors, select your Team in the General tab of the Xcode project.

Now build and run the project on a mobile device. Hocus Pocus!

If you don’t want to export the model and just want to use Custom Vision as an API, click “Prediction URL” and you will see the URLs for your Custom Vision model.
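
For example, here is a minimal Swift sketch of calling the image prediction endpoint. The URL and key are placeholders you copy from the “Prediction URL” dialog, and the exact URL shape depends on your project:

```swift
import Foundation

// Placeholders: copy the real endpoint and key from the “Prediction URL” dialog.
let predictionURLString = "https://<endpoint>/customvision/Prediction/<project-id>/image"
let predictionKey = "<your-prediction-key>"

// POST raw image bytes to the prediction endpoint.
func classifyViaAPI(imageData: Data) {
    guard let url = URL(string: predictionURLString) else { return }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue(predictionKey, forHTTPHeaderField: "Prediction-Key")
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
    request.httpBody = imageData

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // The response is JSON with a probability for each of your tags.
        if let data = data, let body = String(data: data, encoding: .utf8) {
            print(body)
        }
    }.resume()
}
```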

What used to take several days now takes just minutes; the development of machine learning tools is phenomenal.

If you liked this post and want to learn how to build everything from scratch, check this post out: How to Fine-tune Resnet in Keras and Use it in an iOS App via CoreML

Thanks for reading! If you liked this story, you can follow me on Medium and Twitter. You can contact me via e-mail.