On-device Machine Learning in iOS using Core ML, Swift, and the Neural Engine

Sai Balaji
Jun 2, 2020 · 7 min read

Introduction

Core ML is a machine learning framework introduced by Apple at WWDC 2017.

It allows iOS developers to add real-time, personalized experiences to their apps with industry-leading, on-device machine learning models, powered by the Neural Engine.

A11 Bionic Chip Overview

Internals of A11 Bionic Chip

  • Transistors: 4.3 billion
  • CPU: 6 ARM cores (64-bit), 2 high-performance (2.4 GHz) and 4 low-energy
  • GPU: 3 custom GPU cores
  • Neural Engine: up to 600 billion operations per second

Apple introduced the A11 Bionic chip with the Neural Engine on September 12, 2017. This neural network hardware can perform up to 600 billion operations per second and is used for Face ID, Animoji, and other machine learning tasks. Developers can take advantage of the Neural Engine through the Core ML API.

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption.

Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.

Core ML is the foundation for domain-specific frameworks and functionality. Core ML supports Vision for analyzing images, Natural Language for processing text, Speech for converting audio to text, and Sound Analysis for identifying sounds in audio.

Core ML API

Using Create ML in a Swift playground, we can automate the task of building a machine learning model, including training and testing, and then integrate the resulting model file into our iOS project.

Starter Tip 📝: In machine learning, classification problems have discrete labels.

Outline of Core ML

Well, what are we going to build?

In this tutorial, we are going to build an image classifier model using Core ML that can classify orange and strawberry images, and then add the model to our iOS application.

Image classifier model

Starter Tip 📝: Image classification is a supervised machine learning task, in which we train on labeled data (in our case, the label is the name of the folder containing each image).

Prerequisites:

  • Swift 💖language proficiency
  • iOS development basics
  • Object Oriented Programming concepts

Software Requirements:

  • Xcode 10 or later
  • iOS 11.0+ SDK
  • macOS 10.13+

Gathering the Dataset

When gathering a dataset for image classification, make sure you follow the guidelines below, as recommended by Apple.

  • Aim for a minimum of 10 images per category — the more, the better.
  • Avoid highly unbalanced datasets by preparing a roughly equal number of images per category.
  • Make your model more robust by enabling the Create ML UI’s Augmentation options: Crop, Rotate, Blur, Expose, Noise, and Flip.
  • Include redundancy in your training set: Take lots of images at different angles, on different backgrounds, and in different lighting conditions. Simulate real-world camera capture, including noise and motion blur.
  • Photograph sample objects in your hand to simulate real-world users who try to classify objects in their hands.
  • Remove other objects, especially ones that you’d like to classify differently, from view.

Once you have gathered your dataset, make sure you split it into a training set and a testing set and place them in their respective directories 📁.

IMPORTANT NOTE ⚠: Make sure you place the images in their corresponding class directories inside both the train and test directories, because the folder name acts as the label for our images.

In our case, we have two class directories, each containing the respective images, as sketched below.
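For illustration, a typical layout might look like this (the top-level names train and test are just conventions; what matters is that each class folder name becomes the label):

```
dataset/
├── train/
│   ├── Orange/
│   │   ├── orange1.jpg
│   │   └── ...
│   └── Strawberry/
│       ├── strawberry1.jpg
│       └── ...
└── test/
    ├── Orange/
    └── Strawberry/
```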

Building a Model 🔨⚒

Don’t panic! Apple has made this task much simpler by automating the major steps.

With Core ML you can use an already trained machine learning model or build your own model to classify input data. The Vision framework works with Core ML to apply classification models to images, and to pre-process those images to make machine learning tasks easier and more reliable.

Just follow the below steps.

STEP 1: Pull open your Xcode 🛠

STEP 2: Create a blank Swift playground (a macOS playground, since Create ML runs only on macOS).

STEP 3: Clear the default code, add the following program, and run the playground.
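The code from the original screenshot is only a few lines. A minimal sketch, assuming Xcode 10's CreateMLUI framework:

```swift
import CreateMLUI

// Open Create ML's interactive image classifier builder
// in the playground's live view.
let builder = MLImageClassifierBuilder()
builder.showInLiveView()
```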

Description:

Here we open the default model builder interface provided by Xcode.

STEP 4: Drag the train directory into the training area.

Place the train directory in the training area denoted by dotted lines

Starter Tip 📝: We can also give our model a custom name by clicking the down arrow in the training area.

STEP 5: Xcode will automatically process the images and start the training. By default, the training runs for 10 iterations; the time taken to train the model depends on your Mac's specs and the size of the dataset. You can see the training progress in the playground's console.

Waiting for my model to train.

STEP 6: Once training is complete, you can test your model by dragging the test directory into the testing area. Xcode automatically tests your model and displays the results.

Here you can see that our model has classified the images accurately 😎.

STEP 7: Save 💾 your model.

iOS App Integration:

STEP 1: Pull open your Xcode 🛠.

STEP 2: Create a Single View iOS application 📱.

STEP 3: Open up the project navigator 🧭.

STEP 4: Drag and drop the trained model into the project navigator.

Place your Model in Project navigator

STEP 5: Open up Main.storyboard and create a simple interface as shown below; add the IBOutlets and IBActions for the corresponding views.

Place UIImageView, UIButtons and UILabels

STEP 6: Open the ViewController.swift file and add the following code as an extension.
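The screenshot is not reproduced here; a minimal sketch of such an extension follows. The IBAction name pickImage(_:) and the imageView outlet are assumptions, not names from the original:

```swift
import UIKit

extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    // IBAction assumed to be wired to the Pick Image button in the storyboard.
    @IBAction func pickImage(_ sender: UIButton) {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.delegate = self   // requires both delegate protocols above
        present(picker, animated: true)
    }

    // Delegate callback: display the chosen image and dismiss the picker.
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        if let image = info[.originalImage] as? UIImage {
            imageView.image = image   // assumes an `imageView` IBOutlet
        }
        picker.dismiss(animated: true)
    }
}
```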

Description: Here we create an extension of our ViewController class and implement UINavigationControllerDelegate and UIImagePickerControllerDelegate so that a UIImagePickerController is presented when the user taps the Pick Image UIButton. Make sure you set the picker's delegate to the view controller.

Steps Involved in Accessing the Core ML Model in the iOS App

STEP 1: Make sure you import the following libraries.
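These are the frameworks the code in this section depends on:

```swift
import UIKit
import CoreML
import Vision
```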

STEP 2: Create an instance of our Core ML model class.

STEP 3: To make Core ML perform the classification, we first create a request of type VNCoreMLRequest (VN stands for Vision 👁).

STEP 4: Make sure the request crops and scales incoming images so they are compatible with the Core ML model's expected input size.

STEP 5: Place the above code inside a user-defined function that returns the request object, as sketched below.
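Putting steps 2 through 5 together, a sketch of such a function follows. FruitClassifier is a placeholder for the class Xcode generates from your saved .mlmodel file, and handleResult(request:error:) is defined in a later step:

```swift
func makeClassificationRequest() -> VNCoreMLRequest {
    // STEP 2: instantiate the generated model class and wrap it for Vision.
    guard let model = try? VNCoreMLModel(for: FruitClassifier().model) else {
        fatalError("Failed to load the Core ML model")
    }
    // STEP 3: create a Vision request whose completion handler
    // receives the classification results.
    let request = VNCoreMLRequest(model: model) { [weak self] request, error in
        self?.handleResult(request: request, error: error)
    }
    // STEP 4: crop and scale input images to the model's expected size.
    request.imageCropAndScaleOption = .centerCrop
    return request
}
```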

STEP 6: Now we convert our UIImage to a CIImage (CI: Core Image) so that it can be used as input for our Core ML model. This is easily done by passing the UIImage to the CIImage initializer.
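For example, where pickedImage is the UIImage chosen via the picker (the name is a placeholder):

```swift
// CIImage(image:) is failable, so unwrap the result.
guard let ciImage = CIImage(image: pickedImage) else {
    fatalError("Could not convert UIImage to CIImage")
}
```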

STEP 7: Now we can handle our VNCoreMLRequest by creating a request handler and passing in the ciImage.

STEP 8: The request is executed by calling the perform() method and passing the VNCoreMLRequest as the parameter.

Description: DispatchQueue is an object that manages the execution of tasks serially or concurrently on your app’s main thread or on a background thread.

STEP 9: Place the above code in a user-defined function, as shown below.
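A sketch of that function, combining steps 7 and 8 with the background queue described above (the name detect(image:) is an assumption):

```swift
func detect(image: CIImage) {
    // STEP 7: a handler that runs Vision requests against the CIImage.
    let handler = VNImageRequestHandler(ciImage: image)
    // Run the classification on a background queue so the UI stays responsive.
    DispatchQueue.global(qos: .userInitiated).async { [weak self] in
        guard let self = self else { return }
        do {
            // STEP 8: perform() executes our VNCoreMLRequest.
            try handler.perform([self.makeClassificationRequest()])
        } catch {
            print("Classification failed: \(error)")
        }
    }
}
```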

STEP 10: Create a user-defined function called handleResult(), which takes a VNRequest object and an error object as parameters. This function is called when the VNCoreMLRequest completes.
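A sketch of handleResult(), assuming a resultLabel IBOutlet for displaying the classification:

```swift
func handleResult(request: VNRequest, error: Error?) {
    // Vision returns VNClassificationObservation results sorted by
    // confidence, so the first element is the best guess.
    guard let results = request.results as? [VNClassificationObservation],
          let best = results.first else {
        print("Unable to classify image: \(error?.localizedDescription ?? "unknown error")")
        return
    }
    // Update the UILabel on the main thread (see the note below).
    DispatchQueue.main.async {
        self.resultLabel.text = "\(best.identifier) (\(Int(best.confidence * 100))%)"
    }
}
```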

Note 📓: DispatchQueue.main.async is used to update UIKit objects (in our case, the UILabel) on the main thread, because all of the classification work happens on a background thread.

Entire ViewController.swift code
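The original screenshot is not reproduced here; below is a sketch of a complete ViewController.swift assembled from the snippets above. The outlet names (imageView, resultLabel), the action names, and the FruitClassifier model class are all placeholders for whatever you used in your own project:

```swift
import UIKit
import CoreML
import Vision

class ViewController: UIViewController {

    // Outlets assumed to be wired to the storyboard views from STEP 5.
    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var resultLabel: UILabel!

    // Classify button: convert the displayed UIImage to a CIImage
    // and run the Vision request against it.
    @IBAction func classify(_ sender: UIButton) {
        guard let uiImage = imageView.image,
              let ciImage = CIImage(image: uiImage) else {
            resultLabel.text = "Pick an image first"
            return
        }
        detect(image: ciImage)
    }

    func makeClassificationRequest() -> VNCoreMLRequest {
        // Wrap the generated model class for use with Vision.
        guard let model = try? VNCoreMLModel(for: FruitClassifier().model) else {
            fatalError("Failed to load the Core ML model")
        }
        let request = VNCoreMLRequest(model: model) { [weak self] request, error in
            self?.handleResult(request: request, error: error)
        }
        request.imageCropAndScaleOption = .centerCrop
        return request
    }

    func detect(image: CIImage) {
        let handler = VNImageRequestHandler(ciImage: image)
        // Classify on a background queue; results come back via handleResult.
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            guard let self = self else { return }
            do {
                try handler.perform([self.makeClassificationRequest()])
            } catch {
                print("Classification failed: \(error)")
            }
        }
    }

    func handleResult(request: VNRequest, error: Error?) {
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else {
            print("Unable to classify image: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        // UIKit must be updated on the main thread.
        DispatchQueue.main.async {
            self.resultLabel.text = "\(best.identifier) (\(Int(best.confidence * 100))%)"
        }
    }
}

extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBAction func pickImage(_ sender: UIButton) {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        if let image = info[.originalImage] as? UIImage {
            imageView.image = image
        }
        picker.dismiss(animated: true)
    }
}
```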

All set!

Now fire up 🔥 your Simulator and test the app.

Note 📝: Make sure you have some orange 🍊 and strawberry 🍓 pictures in the photo library of your Simulator.

  • Click the Pick Image button.
  • Select any image.
  • Click the Classify button.
  • Select another picture and click Classify again.

Hats off 🎉🙌🥳:

You have built your first iOS app using Core ML.
