Getting started with Machine Learning on iOS

Paul Nyondo · The Andela Way · Jan 8, 2018

The past few months have earned me a few battle scars in the world of iOS development with Swift, while all the time I kept looking over the hedge into the world of machine learning and artificial intelligence, because, as the proverbial saying goes, "the grass is always greener on the other side."

Till the thought hit me: "Let's try to do some machine learning on iOS." And now here we are.

This write-up shows you how to get started with machine learning as an iOS engineer. We shall develop an application that does image classification with a pre-trained model. If you have read this far and are still wondering what the heck machine learning is:

"Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed."

You can look up the final version of the code here.

Assumptions made

  • You are somewhat familiar with iOS development, so I can jump straight into the good stuff: machine learning.

Machine Learning model

With iOS 11, Apple introduced CoreML, which aids the integration of machine learning models into your iOS application.

We shall be using the MobileNet model. However, there are several other models hosted by Apple, and you can access them here. The device will be used mostly for inference; no training will take place on the device.

Image Classification Application Demo

[Demo of the finished image classification application]

Steps

1. Download the machine learning model from here, then drag and drop it into your Xcode project directory. Make sure the project's target membership is selected.
2. Since our application is basically going to do image classification, we shall leverage the Vision framework, also introduced in iOS 11. In a nutshell, Apple describes the Vision framework as applying "high-performance image analysis and computer vision techniques to identify faces, detect features, and classify scenes in images and video."
import CoreML
import Vision

3. We shall be using VNCoreMLRequest to make an image analysis request that asks the CoreML model to process images. Things to note: VNCoreMLRequest requires a model of type VNCoreMLModel and a completion handler, so we convert the model in the lines below.

let modelFile = MobileNet()
let model = try! VNCoreMLModel(for: modelFile.model)

4. Next we write the completion handler, which extracts the result with the highest probability.
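
Here is a minimal sketch of such a handler; the function name processResults is my own choice, picked to match the request created in the next step. Vision returns the classifications as VNClassificationObservation objects sorted by confidence, highest first:

func processResults(request: VNRequest, error: Error?) {
    // Vision returns classification observations sorted by confidence
    guard let results = request.results as? [VNClassificationObservation],
          let topResult = results.first else {
        print("Unable to classify image: \(error?.localizedDescription ?? "unknown error")")
        return
    }
    // identifier is the predicted label, confidence its probability
    print("\(topResult.identifier): \(Int(topResult.confidence * 100))%")
}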

5. Let's make the image analysis request for CoreML to process the image:

let request = VNCoreMLRequest(model: model, completionHandler: processResults)
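
Optionally, you can also tell Vision how to fit your image to the model's expected input size via the request's imageCropAndScaleOption property; .centerCrop below is just one of the available choices:

// Crop to the center square before scaling to the model's input size
request.imageCropAndScaleOption = .centerCrop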

6. Finally, we need an object that will perform one or more image analysis requests on the image; this is found in the class VNImageRequestHandler.

let handler = VNImageRequestHandler(data: imageData, options: [:])

Things to note:

  • This takes in the image data as raw bytes rather than a UIImage, since the CoreML model expects input of type CVPixelBuffer rather than UIImage, for performance reasons (one way to obtain the bytes is sketched below)
  • The Vision framework handles the conversion of the image data to a CVPixelBuffer
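
For illustration, assuming the image arrives as a UIImage (for example, one picked with UIImagePickerController; the variable name uiImage is hypothetical), you can obtain the raw bytes with UIImage.jpegData (iOS 12+; on iOS 11 the equivalent is UIImageJPEGRepresentation):

// Hypothetical: uiImage is a UIImage supplied elsewhere in your app
guard let imageData = uiImage.jpegData(compressionQuality: 1.0) else {
    fatalError("Could not encode image as JPEG")
}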

7. Perform the request

try! handler.perform([request])
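
Note that perform(_:) runs synchronously and can throw, so the try! above will crash if the request fails. A more defensive sketch (my own addition, not from the original) dispatches the work off the main thread and handles the error:

DispatchQueue.global(qos: .userInitiated).async {
    do {
        // perform(_:) blocks until the request completes, so keep it off the main thread
        try handler.perform([request])
    } catch {
        print("Failed to perform classification: \(error)")
    }
}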

Conclusion

Are you interested in machine learning, iOS development, or programming in general? Let's connect and learn together. If you have any questions, feel free to ask. If you have some feedback, I would be glad to receive it.

Connect with me via LinkedIn

Till next time, never stop learning.
