WWDC 2018: Create a machine learning model to classify images with CreateML in under 10 minutes

Last week at WWDC, Apple introduced a new framework that makes creating machine learning models really easy, and when I saw it for the first time it just blew my mind. I could not believe how simple it was.

At work, we have a small shop where we can buy snacks. We have a simple app that we made in Flutter to register the purchases. I am going to show you how to create a Core ML model to automatically detect products using the iPhone camera.

Requirements

To make it work you have to run macOS Mojave and have Xcode 10. If you don’t have them yet, you can download them here: https://developer.apple.com/download/. As my MacBook Pro is the computer I use every day at work, I decided to create a new partition to install macOS Mojave. If you don’t know how to do that, iMore made a good tutorial about it: https://www.imore.com/how-to-partition-your-mac.

Create the model data

Let’s assume that you are running macOS Mojave and have Xcode 10 installed. The first thing you need to do is collect data for your model. For our example, we want to recognize three different products. We take the products from our shelf and start taking pictures of each one in different positions; it is important to vary the angle and the place where you take the pictures. I asked Apple developers during the WWDC labs how many pictures were enough to create a good model, and they said that 50 is a good number to start with. In my experiments, I used 10 pictures per product and already got very good results.

Organize the data — it’s easier than you think

Now that you have the data you need to organize it in order to train the model.

  1. Create a folder; we name it model.
  2. Create a folder for every product that you want to detect; the folder name will be the label your model detects.
  3. Import the pictures of the products you took and put them in the matching folder (see the example layout after this list).
  4. There is no step 4: your data is ready to train the model.
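
As a sketch, with three made-up product names (substitute your own), the folder layout would look something like this:

    model/
        KitKat/
            IMG_0001.jpg
            IMG_0002.jpg
            ...
        Twix/
            IMG_0021.jpg
            ...
        Oreo/
            IMG_0042.jpg
            ...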

Train your model using an Xcode playground in 3 steps

  1. Open Xcode 10, create a new macOS playground, and write the three lines of code shown after these steps.

2. Hit the run button and don’t forget to enable the live view tab.

3. Drag and drop the folder that contains your data into the box that appears in the live view.
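
For reference, the three lines from step 1 use Apple’s CreateMLUI framework; the original screenshot is not reproduced here, so this is the standard snippet for this workflow:

    import CreateMLUI

    // Create the image classifier builder and display its UI in the playground's live view
    let builder = MLImageClassifierBuilder()
    builder.showInLiveView()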

Your Mac is now training your model; the duration will vary depending on the amount of data you provided.

Testing the model you just created

Now that we have a trained model, we need to test it. To do this, take a few more pictures of the products, then create a new folder (we will call it test_data) with subfolders named with the same labels you used for your model, and put the pictures you just took inside each folder. Finally, just drag and drop the test_data folder into the box in the playground. Xcode tests the model we just created on the test data we passed it and gives you the results.

You can test your model in the same place where you created it.
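
If it helps, the test_data folder mirrors the training layout, with the same (made-up) label names as subfolders:

    test_data/
        KitKat/
            IMG_0101.jpg
        Twix/
            IMG_0102.jpg
        Oreo/
            IMG_0103.jpg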

To finish: export the model

To export the model, just drag and drop the file onto your desktop or into your Xcode project.

You can also save it to disk:

Interface to save the Core ML model to disk
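
If you would rather skip the live view entirely, CreateML also offers a code-only API. Here is a minimal sketch, assuming the folder layout from earlier (all paths, names, and metadata below are placeholders), that trains, evaluates, and writes the .mlmodel file to disk:

    import CreateML
    import Foundation

    // Train an image classifier from the labeled folders (placeholder path)
    let trainingDir = URL(fileURLWithPath: "/Users/me/model")
    let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

    // Evaluate it on the test_data folders (placeholder path)
    let testDir = URL(fileURLWithPath: "/Users/me/test_data")
    let evaluation = classifier.evaluation(on: .labeledDirectories(at: testDir))
    print(evaluation)

    // Save the trained model to disk with some metadata
    let metadata = MLModelMetadata(author: "Me",
                                   shortDescription: "Snack classifier",
                                   version: "1.0")
    try classifier.write(to: URL(fileURLWithPath: "/Users/me/Products.mlmodel"),
                         metadata: metadata)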

That’s it! You now know how to create a machine learning model to classify images that can be used inside iOS and macOS apps.

Links: