Introduction to CoreML

Add Machine Learning features into your iOS apps

What is CoreML?

CoreML is a machine learning framework introduced by Apple at WWDC 2017. It is the same kind of technology Apple uses to power intelligent features such as Siri, the Camera, and QuickType. With CoreML, developers can implement machine learning in their apps with just a few lines of code, which makes it a great framework for getting introduced to machine learning.

CoreML provides ready-to-use models that you can integrate into your iOS apps. In this short tutorial, I will show you how to detect the scene of an image using the Places205-GoogLeNet model.

Places205-GoogLeNet Integration

First of all, you will need to download the Places205-GoogLeNet model from https://developer.apple.com/machine-learning/. Scroll down until you see the Models section and locate the model. Then click Download Core ML Model, and the download will start right away. When it finishes, just drag the .mlmodel file into your Xcode project. 👇

In order to test the model properly, we will need a camera or photo library feature in the app. Just visit this link, where you can find a copy-and-paste camera solution.


CoreML + Vision

Assuming that your project contains everything described above, we can continue with implementing CoreML and Vision in Swift.

Vision applies high-performance image analysis and computer vision techniques to identify faces, detect features, and classify scenes in images and video.

The first thing you need to do is import the CoreML and Vision frameworks at the top of your .swift file. After that, we will create one function that performs the following steps:

  1. Load the ML model
  2. Create the Vision request
  3. Run the GoogLeNetPlaces classification
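Here is a minimal sketch of what that function might look like. I am assuming the downloaded file is named GoogLeNetPlaces.mlmodel, so Xcode generates a GoogLeNetPlaces class for it; the function name scanImage(image:) is my own choice.

```swift
import CoreML
import Vision

func scanImage(image: CIImage) {
    // 1. Load the ML model through its Xcode-generated class
    guard let model = try? VNCoreMLModel(for: GoogLeNetPlaces().model) else {
        fatalError("Could not load the GoogLeNetPlaces model")
    }

    // 2. Create a Vision request with a completion handler
    let request = VNCoreMLRequest(model: model) { request, error in
        guard let results = request.results as? [VNClassificationObservation] else {
            fatalError("Unexpected result type from VNCoreMLRequest")
        }
        // Results are sorted by confidence; print each detected scene
        for classification in results {
            print("\(classification.identifier): \(classification.confidence)")
        }
    }

    // 3. Run the request on the image, off the main thread
    let handler = VNImageRequestHandler(ciImage: image)
    DispatchQueue.global(qos: .userInteractive).async {
        try? handler.perform([request])
    }
}
```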

If everything goes well, the VNCoreMLRequest will return an array of VNClassificationObservation objects. Each observation's identifier holds the name of the scene, and its confidence holds a number between 0 and 1: the probability that the classification is correct.

Note that I iterate over the results array purely for presentational purposes. Keep in mind that the array is sorted by confidence, so the first object has the highest confidence level.

Finally, convert the UIImage you receive from the camera or photo library into a CIImage and pass it to the function:

if let img = CIImage(image: image) {
    self.scanImage(image: img)
}
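To tie this together with the camera feature, the conversion would typically live in the image picker's delegate method. A sketch, assuming a standard UIImagePickerControllerDelegate setup (the delegate method and info key are from UIKit; ViewController and scanImage(image:) are my own naming):

```swift
import UIKit

extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [String: Any]) {
        picker.dismiss(animated: true)
        // Convert the picked UIImage into a CIImage for Vision
        if let image = info[UIImagePickerControllerOriginalImage] as? UIImage,
           let ciImage = CIImage(image: image) {
            self.scanImage(image: ciImage)
        }
    }
}
```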

I hope that this short tutorial on how CoreML works helped you understand the basics of implementing machine learning features in your iOS apps. If you liked my story, please share or 👏 so others can read it too. 🤖

You can find more CoreML models at https://developer.apple.com/machine-learning/. 👇