On-device Machine Learning in iOS using Core ML, Swift, and the Neural Engine

Sai Balaji
Mac O’Clock · Jun 2, 2020 · 9 min read

Introduction

Core ML is a machine learning framework introduced by Apple at WWDC 2017.

It allows iOS developers to add real-time, personalized experiences to their apps with on-device machine learning models, accelerated by the Neural Engine.

A11 Bionic Chip Overview

Internals of A11 Bionic Chip

  • Transistors: 4.3 billion
  • CPU: 6 ARM cores (64-bit), 2 high-performance (2.4 GHz) and 4 high-efficiency
  • GPU: 3-core custom Apple GPU
  • Neural Engine: up to 600 billion operations per second

Apple introduced the A11 Bionic chip with the Neural Engine on September 12, 2017. This neural network hardware can perform up to 600 billion operations per second and is used for Face ID, Animoji, and other machine learning tasks. Developers can take advantage of the Neural Engine by using the Core ML API.

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption.
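As an aside, on newer OS versions (iOS 12+ / macOS 10.14+) you can hint which compute units Core ML may use when a model is loaded. A minimal sketch, assuming you want to allow all of them:

import CoreML

// Allow Core ML to choose among CPU, GPU, and Neural Engine (this is the default).
let config = MLModelConfiguration()
config.computeUnits = .all
// Use .cpuOnly or .cpuAndGPU if you need to restrict execution, e.g. for debugging.
// The configuration is passed in when the model is loaded.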

Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.

Core ML is the foundation for domain-specific frameworks and functionality. Core ML supports Vision for analyzing images, Natural Language for processing text, Speech for converting audio to text, and Sound Analysis for identifying sounds in audio.
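For example, a single call into the Natural Language framework (one of the domain-specific frameworks built on Core ML) can detect the dominant language of a string; a minimal sketch:

import NaturalLanguage

// Detect the dominant language of a piece of text.
let recognizer = NLLanguageRecognizer()
recognizer.processString("Bonjour tout le monde")
print(recognizer.dominantLanguage?.rawValue ?? "unknown") // prints "fr"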

Core ML API

Using a Swift Playground, we can automate the task of building a machine learning model, including training and testing, and then integrate the resulting model file into our iOS project.

Starter Tip 📝: In machine learning, classification problems have discrete labels.

Outline of Core ML

What are we going to build?

In this tutorial, we are going to see how to build an image classifier model using Core ML that can classify orange and strawberry images, and how to add the model to our iOS application.

Image classifier model

Starter Tip 📝: Image classification is a supervised machine learning task in which we use labeled data (in our case, the folder name acts as the label for each image).

Prerequisites:

  • Swift language proficiency
  • iOS development basics
  • Object Oriented Programming concepts

Software Requirements:

  • Xcode 10 or later
  • iOS 11.0+ SDK
  • macOS 10.14+ (required for Create ML)

Gathering Data set

When gathering a dataset for image classification, make sure you follow the guidelines below, which are recommended by Apple.

  • Aim for a minimum of 10 images per category — the more, the better.
  • Avoid highly unbalanced datasets by providing a roughly equal number of images for each category.
  • Make your model more robust by enabling the Create ML UI’s Augmentation options: Crop, Rotate, Blur, Expose, Noise, and Flip.
  • Include redundancy in your training set: Take lots of images at different angles, on different backgrounds, and in different lighting conditions. Simulate real-world camera capture, including noise and motion blur.
  • Photograph sample objects in your hand to simulate real-world users that try to classify objects in their hands.
  • Remove other objects, especially ones that you’d like to classify differently, from view.

Once you have gathered your dataset, make sure you split it into a training set and a testing set and place them in their respective directories.

IMPORTANT NOTE ⚠️: Make sure you place each image in its corresponding class directory inside the train and test directories, because the folder name acts as the label for our images.

In our case, the train and test directories each contain two subdirectories, Orange and Strawberry, each holding the respective images.
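A possible layout looks like this (the top-level names are up to you; only the class folder names matter, since they become the labels):

Dataset
├── Train
│   ├── Orange        (orange images)
│   └── Strawberry    (strawberry images)
└── Test
    ├── Orange        (orange images)
    └── Strawberry    (strawberry images)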

Building a Model 🔨

Apple has made this task much simpler by automating the major steps.

With Core ML you can use an already trained machine learning model or build your own model to classify input data. The Vision framework works with Core ML to apply classification models to images, and to pre-process those images to make machine learning tasks easier and more reliable.

Just follow the below steps.

STEP 1: Open Xcode 🛠

STEP 2: Create a blank macOS Swift Playground (MLImageClassifierBuilder is only available in macOS playgrounds).

STEP 3: Clear the default code, add the following program, and run the playground.

import CreateMLUI // Import the required module

let builder = MLImageClassifierBuilder() // Create an instance of MLImageClassifierBuilder
builder.showInLiveView() // Show the Xcode model builder interface in the assistant editor

Description :

Here we open the default model builder interface provided by Xcode.

STEP 4: Drag the train directory into the training area.

Place the train directory in the training area denoted by dotted lines

Starter Tip 📝: We can also give our model a custom name by clicking the down arrow in the training area.

STEP 5: Xcode will automatically process the images and start the training process. By default, training runs for 10 iterations; the time taken to train the model depends on your Mac’s specs and the size of the dataset. You can see the training progress in the Playground console.

STEP 6: Once training is complete, you can test your model by dragging the Test directory into the testing area. Xcode automatically tests your model and displays the results.

Here you can see that our model has classified the images accurately 😎.

STEP 7: Save your model.
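Side note: if you prefer code over the drag-and-drop UI, the CreateML framework (macOS 10.14+) can train the same kind of classifier programmatically. A rough sketch, with placeholder paths you would replace with your own Train and Test directories:

import CreateML
import Foundation

// Placeholder paths; point them at your own Train and Test directories.
let trainURL = URL(fileURLWithPath: "/path/to/Train")
let testURL  = URL(fileURLWithPath: "/path/to/Test")

// Train an image classifier; the subdirectory names (Orange, Strawberry) become the labels.
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainURL))

// Evaluate on the held-out test set.
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testURL))
print("Test classification error: \(evaluation.classificationError)")

// Save the trained model so it can be dragged into the iOS project.
try classifier.write(to: URL(fileURLWithPath: "/path/to/ImageClassifier.mlmodel"))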

iOS App integration:

STEP 1: Open Xcode.

STEP 2: Create a Single View iOS application.

STEP 3: Open the project navigator.

STEP 4: Drag and drop the trained model into the project navigator.

Place your Model in Project navigator

STEP 5: Open Main.storyboard and create a simple interface as shown below, then add the IBOutlets and IBActions for the corresponding views.

Place UIImageView, UIButtons and UILabels
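For reference, the outlets and actions wired up here match those in the full ViewController.swift listing at the end of this tutorial:

@IBOutlet weak var fruitnamelbl: UILabel!       // Shows the predicted fruit name
@IBOutlet weak var fruitImageView: UIImageView! // Shows the picked image

@IBAction func classifybtnclicked(_ sender: Any) {
    executeRequest(image: fruitImageView.image!) // Run the Core ML request (defined later)
}

@IBAction func piclimage(_ sender: Any) {
    getimage() // Present the image picker (defined later)
}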

STEP 6: Open ViewController.swift file and add the following code as an extension.

extension ViewController: UINavigationControllerDelegate, UIImagePickerControllerDelegate {
    func getimage() {
        let imagePicker = UIImagePickerController() // Create a UIImagePickerController instance
        imagePicker.delegate = self                 // Set the delegate
        imagePicker.sourceType = .photoLibrary      // Use the user's photo library as the source
        imagePicker.allowsEditing = true            // Allow the user to crop the image
        present(imagePicker, animated: true)        // Present the image picker
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        // Get the user-selected image from the info dictionary with the .editedImage key and cast it to UIImage
        let fimage = info[.editedImage] as! UIImage
        fruitImageView.image = fimage            // Show the picked image in the UIImageView
        dismiss(animated: true, completion: nil) // Close the image picker once the user selects an image
    }

    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        dismiss(animated: true, completion: nil) // Close the image picker if the user cancels
    }
}

Description: Here we create an extension of our ViewController class and implement UINavigationControllerDelegate and UIImagePickerControllerDelegate so that a UIImagePickerController is presented when the user taps the Pick Image UIButton. Make sure you set the delegate.

Steps Involved in Accessing the Core ML Model in the iOS App

STEP 1: Make sure you import the following libraries.

import CoreML  
import Vision

STEP 2: Create an instance of our Core ML model class. ImageClassifier is the class that Xcode automatically generates when you add the .mlmodel file to your project.

let modelobj = ImageClassifier()

STEP 3: To make Core ML perform classification, we first create a request of type VNCoreMLRequest (VN stands for Vision).

var myrequest: VNCoreMLRequest? // Declare a variable to hold the VNCoreMLRequest
myrequest = VNCoreMLRequest(model: fruitmodel, completionHandler: { (request, error) in
    // This completion handler is called once the request has been executed by Core ML
    self.handleResult(request: request, error: error) // Call the user-defined function
})

STEP 4: Make sure you crop and scale the image so that it is compatible with the Core ML model.

myrequest!.imageCropAndScaleOption = .centerCrop

STEP 5: Place the above code inside a user-defined function that returns the request object.

func mlrequest() -> VNCoreMLRequest {
    var myrequest: VNCoreMLRequest?
    let modelobj = ImageClassifier()
    do {
        let fruitmodel = try VNCoreMLModel(for: modelobj.model)
        myrequest = VNCoreMLRequest(model: fruitmodel, completionHandler: { (request, error) in
            self.handleResult(request: request, error: error)
        })
    } catch {
        print("Unable to create a request")
    }
    myrequest!.imageCropAndScaleOption = .centerCrop
    return myrequest!
}

STEP 6: Now we convert our UIImage to a CIImage (CI stands for Core Image) so that it can be used as input to our Core ML model. This is done by creating a CIImage instance and passing the UIImage to its initializer.

guard let ciImage = CIImage(image: image) else {
    return
}

STEP 7: Now we can handle our VNCoreMLRequest by creating a request handler and passing in the ciImage.

let handler = VNImageRequestHandler(ciImage: ciImage)  

STEP 8: The request is executed by calling the perform() method and passing the VNCoreMLRequest as a parameter.

DispatchQueue.global(qos: .userInitiated).async {
    let handler = VNImageRequestHandler(ciImage: ciImage)
    do {
        try handler.perform([self.mlrequest()])
    } catch {
        print("Failed to get the description")
    }
}

Description: DispatchQueue is an object that manages the execution of tasks serially or concurrently on your app’s main thread or on a background thread.

STEP 9: Place the above code in a user-defined function as shown below.

func executeRequest(image: UIImage) {
    guard let ciImage = CIImage(image: image) else {
        return
    }
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(ciImage: ciImage)
        do {
            try handler.perform([self.mlrequest()])
        } catch {
            print("Failed to get the description")
        }
    }
}

STEP 10: Create a user-defined function called handleResult(), which takes a VNRequest object and an error object as parameters. This function is called when the VNCoreMLRequest has completed.
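It looks like this (the same implementation appears in the full listing below):

func handleResult(request: VNRequest, error: Error?) {
    // The results of an image classification request are VNClassificationObservation objects
    if let classificationresult = request.results as? [VNClassificationObservation] {
        DispatchQueue.main.async {
            // Update the label on the main thread with the top classification
            self.fruitnamelbl.text = classificationresult.first!.identifier
            print(classificationresult.first!.identifier)
        }
    } else {
        print("Unable to get the results")
    }
}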

Note 📓: DispatchQueue.main.async is used to update UIKit objects (in our case, the UILabel) on the main thread, because all the classification work is done on a background thread.

Note 📝: Make sure you have some orange and strawberry pictures in the photo library of your Simulator.

Full ViewController.swift file

import UIKit
import CoreML
import Vision

class ViewController: UIViewController {
    var name: String = ""
    @IBOutlet weak var fruitnamelbl: UILabel!
    @IBOutlet weak var fruitImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    @IBAction func classifybtnclicked(_ sender: Any) {
        executeRequest(image: fruitImageView.image!)
    }

    @IBAction func piclimage(_ sender: Any) {
        getimage()
    }

    func mlrequest() -> VNCoreMLRequest {
        var myrequest: VNCoreMLRequest?
        let modelobj = ImageClassifier()
        do {
            let fruitmodel = try VNCoreMLModel(for: modelobj.model)
            myrequest = VNCoreMLRequest(model: fruitmodel, completionHandler: { (request, error) in
                self.handleResult(request: request, error: error)
            })
        } catch {
            print("Unable to create a request")
        }
        myrequest!.imageCropAndScaleOption = .centerCrop
        return myrequest!
    }

    func executeRequest(image: UIImage) {
        guard let ciImage = CIImage(image: image) else {
            return
        }
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage)
            do {
                try handler.perform([self.mlrequest()])
            } catch {
                print("Failed to get the description")
            }
        }
    }

    func handleResult(request: VNRequest, error: Error?) {
        if let classificationresult = request.results as? [VNClassificationObservation] {
            DispatchQueue.main.async {
                self.fruitnamelbl.text = classificationresult.first!.identifier
                print(classificationresult.first!.identifier)
            }
        } else {
            print("Unable to get the results")
        }
    }
}

extension ViewController: UINavigationControllerDelegate, UIImagePickerControllerDelegate {
    func getimage() {
        let imagePicker = UIImagePickerController()
        imagePicker.delegate = self
        imagePicker.sourceType = .photoLibrary
        imagePicker.allowsEditing = true
        present(imagePicker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        let fimage = info[.editedImage] as! UIImage
        fruitImageView.image = fimage
        dismiss(animated: true, completion: nil)
    }

    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        dismiss(animated: true, completion: nil)
    }
}
Now run the app and test it:

  • Click the Pick Image button
  • Select any image
  • Click the Classify button
  • Select another picture and click Classify

Hats off

You have built your first iOS app using Core ML.
