Creating a Core ML machine learning model with a dataset and integrating it into an iOS application

Naresh
Published in Engineering Jio
4 min read · Aug 26, 2020

When we talk about machine learning in iOS, Core ML comes to mind, so in this post let’s understand how we can accomplish machine learning tasks in an iOS app.

Core ML is an Apple framework for integrating machine learning models into our app. Core ML provides a unified representation for all models. With Core ML we can perform predictions on the user’s device.

An ML model is the result of applying an algorithm to a dataset in order to predict results.

There are many ML tasks, such as classification, similarity checks, prediction, and regression. We will focus on image classification here, which means our model will take an image as input and predict its type or category based on our training algorithm and the dataset provided.
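As a toy illustration of what classification means here (independent of Core ML and Turi Create, with made-up feature vectors standing in for images), a classifier assigns an input to the category whose training examples it most resembles:

```python
# Toy nearest-centroid classifier: the labels and vectors below are
# hypothetical, purely to illustrate the idea of classification.
def centroid(vectors):
    # Average the training vectors of one category.
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def classify(sample, training):
    # training maps label -> list of feature vectors.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cents = {label: centroid(vecs) for label, vecs in training.items()}
    # Pick the category whose centroid is closest to the sample.
    return min(cents, key=lambda label: dist(sample, cents[label]))

training = {
    "apple": [(1.0, 0.2), (0.9, 0.3)],
    "banana": [(0.1, 0.9), (0.2, 1.0)],
}
print(classify((0.95, 0.25), training))  # apple
```

A real image classifier works on learned image features rather than hand-made vectors, but the input-to-label mapping is the same idea.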

Here are a few of the pre-built and converted Core ML models shared by Apple, which can also be used directly in an iOS app:

https://developer.apple.com/machine-learning/models/

We will not use a pre-built model, since our goal here is to create our own model with our own intent and algorithm.

We will create our own ML model to classify grocery images and identify the name of each grocery item with the highest probability.

There are many ways to create ML models, such as Keras, TensorFlow, and Caffe, but (.mlmodel) is the model format Core ML supports on iOS, so we will use Turi Create to build our model with less effort and good efficiency.

Also, if we want to use any other approach, Apple provides coremltools to convert other model types into the Core ML format, i.e. (.mlmodel).

We have used a custom dataset of around 30 grocery categories to predict the grocery class. A dataset is mandatory for any ML model to make predictions, and it plays a vital role in the accuracy of the model.

Simply put: the stronger the dataset, the more accurate our ML model.

Note: We should have at least 200 images of each category, captured in different orientations, in our dataset to achieve desirable and more accurate predictions.
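To sanity-check that a dataset meets this threshold, a small script can count the images in each category folder. This is just a sketch, assuming one subfolder per category under an images/ folder and common image extensions:

```python
# Sketch only: count images per category folder and flag categories
# below the recommended 200-image threshold. The "images/" path and
# the file extensions are assumptions for illustration.
import os
from collections import Counter

def category_counts(root):
    counts = Counter()
    for dirpath, _, filenames in os.walk(root):
        label = os.path.basename(dirpath)
        n = sum(1 for f in filenames
                if f.lower().endswith((".jpg", ".jpeg", ".png")))
        if n:
            counts[label] += n
    return counts

if __name__ == "__main__":
    for label, n in sorted(category_counts("images").items()):
        flag = "" if n >= 200 else "  <-- needs more images"
        print(f"{label}: {n}{flag}")
```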

Now that we have a basic understanding of our approach and what we need to achieve, we will begin by installing Turi Create to make our idea work.

Turi Create is fast, flexible, and ready to deploy for image classification and related ML tasks.

Requirements :

  • Python 2.7, 3.5, 3.6, 3.7
  • x86_64 architecture
  • At least 4 GB of RAM

On a MacBook these requirements are already met, since macOS ships with Python 2.7. It is recommended to use a virtual environment.

Use $ pip install turicreate to install it on your system.

Folder Structure

Structuring data :

We’ll structure the images based on what they represent: for example, all of the images of one type will be in one folder.
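As an illustration, the layout might look like this (the folder and file names here are hypothetical):

```
classifier/
├── images/
│   ├── apple/
│   │   ├── img001.jpg
│   │   └── ...
│   ├── banana/
│   │   ├── img001.jpg
│   │   └── ...
│   └── ...
└── filename.py
```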

Training our model :

Now that our dataset is structured, we will begin training our model with a simple Python file, created alongside the images folder as shown above.

Create a Python file inside our classifier folder, alongside the images folder:

$ touch filename.py

The next step is to import turicreate and load our images:

  • import turicreate
  • data = turicreate.image_analysis.load_images("images/")

Now we will map each folder name to a label. The same label names will be returned by our model when it is used for prediction in our app. Note that this line uses Python’s os module, so import os as well.

  • data["label"] = data["path"].apply(lambda path: os.path.basename(os.path.dirname(path)))
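To see what that lambda does in isolation, here is a standalone Python sketch with made-up paths: the parent folder name of each image path becomes its label.

```python
# Standalone illustration of the label-extraction lambda above.
# The paths are hypothetical examples of files in our dataset.
import os

def label_for(path):
    # "images/apple/img001.jpg" -> parent folder name "apple"
    return os.path.basename(os.path.dirname(path))

paths = [
    "images/apple/img001.jpg",
    "images/banana/img042.jpg",
]
labels = [label_for(p) for p in paths]
print(labels)  # ['apple', 'banana']
```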

Now we need a fast tabular data structure to process our images; in Turi Create this is the SFrame. The load_images call above already returns an SFrame, and we can persist it with data.save('grocery.sframe') so the training script below can load it.

Let’s take another look at our script and understand the basics:

import turicreate as tc

# Load data

data = tc.SFrame('grocery.sframe')

# Create a model (the target is the label column we created above)

model = tc.image_classifier.create(data, target='label')

# Make predictions

predictions = model.predict(data)

# Export to Core ML

model.export_coreml('MyImageClassify.mlmodel')

Inside the classifier folder, run the Python script in the terminal, and after some time we will get our Core ML model (.mlmodel) to use in our iOS application.

Integrating our model into the iOS app :

Simply drag and drop your Core ML model into your iOS Xcode project.

Instantiate your ML model in a view controller and make a prediction.

let model = NMIndianGroceryModel()

Make a prediction:

// The model takes the input image as a pixel buffer (CVPixelBuffer)

if let output = try? model.prediction(image: buffer) {

let objectName: String = output.label

outputText = objectName

}

We get the image type back as a string output and can use it to classify images.

Here is the simulator output to show our idea is now converted to reality.

This completes our image recognition using a custom Core ML model in an iOS app.
