
Image Recognition using Core ML and Vision Framework
What’s Core ML?
Core ML is the foundation for domain-specific frameworks and functionality. Core ML supports Vision for image analysis, Foundation for natural language processing (for example, the NSLinguisticTagger class), and GameplayKit for evaluating learned decision trees. Core ML itself builds on top of low-level primitives like Accelerate and BNNS, as well as Metal Performance Shaders.
With Core ML, you can integrate trained machine learning models into your app.

Core ML opens up many possibilities for developers to build features such as image recognition, natural language processing (NLP), text prediction, and more.
You might be thinking that adding this kind of AI to an app must be very difficult, but to your surprise, Core ML is very easy to use. In this tutorial, we will see that it takes only a few lines of code to integrate Core ML into our apps.
Isn’t that cool? Let’s get started.
App Overview
The app we are going to build is very simple. It will allow the user either to take a picture with the camera or to choose a photo from their photo library. Then, the machine learning model will try to predict what object is in the picture. The result may not always be accurate, but you will get an idea of how you can apply Core ML in your app.
Let’s get started.
Create Project
To begin, create a new project in Xcode 9, select the Single View Application template, and make sure the language is set to Swift.

Create the User Interface
Let’s first open the Main.storyboard file and add some UI elements. Add a UIImageView, a UILabel, and a UIButton to the view.

I have added a default image to the UIImageView. Below it is a UILabel, which will display the prediction along with its confidence level. Finally, there is a UIButton, which will let us pick an image from the camera or the photo library.
Let’s move ahead.
Move on to the ViewController.swift file and import Core ML and Vision right below the UIKit import statement.
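The top of ViewController.swift should then look something like this (the module names are CoreML and Vision):

```swift
import UIKit
import CoreML
import Vision
```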
Now, add IBOutlets for the objects we placed in the storyboard.
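For example (the names imageView and answerLabel are my choice; use whatever names match your storyboard connections):

```swift
@IBOutlet weak var imageView: UIImageView!
@IBOutlet weak var answerLabel: UILabel!
```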
Implementing Camera and Photo Library Functions
Add the delegate conformances required for picking an image, and initialize the UIImagePickerController.
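A minimal sketch of that setup, keeping a single reusable picker instance:

```swift
// The view controller adopts both delegate protocols required by UIImagePickerController
class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    // ... IBOutlets from the previous step ...

    // One picker instance, reused for both the camera and the photo library
    lazy var imagePicker: UIImagePickerController = {
        let picker = UIImagePickerController()
        picker.delegate = self
        picker.allowsEditing = false
        return picker
    }()
}
```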
Now, insert the UIImagePickerControllerDelegate method.
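A sketch of that delegate method, using the Xcode 9 / Swift 4 signature (newer SDKs use UIImagePickerController.InfoKey instead of plain String keys):

```swift
func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [String: Any]) {
    dismiss(animated: true)

    // Grab the original, unedited image that the user picked
    guard let image = info[UIImagePickerControllerOriginalImage] as? UIImage else {
        return
    }

    // Show the image and hand it off to the classifier
    imageView.image = image
    detectScene(image: image)
}
```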
Here we simply take the UIImage from the UIImagePickerController, set it on our UIImageView, and pass the same image to the detectScene() method.
Now, you must be wondering what the detectScene() method does. That’s the most important part of this app.
First, insert the detectScene() method into your view controller.
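Here is a sketch of what that method can look like, assuming the Inceptionv3.mlmodel file has been added to the project (Xcode generates the Inceptionv3 class for you) and the label outlet is named answerLabel:

```swift
func detectScene(image: UIImage) {
    answerLabel.text = "Detecting image..."

    // Convert the UIImage to a CIImage and wrap the Inceptionv3 model for Vision
    guard let ciImage = CIImage(image: image),
          let model = try? VNCoreMLModel(for: Inceptionv3().model) else {
        answerLabel.text = "Could not load image or model."
        return
    }

    // Create a Vision request whose completion handler reports the top prediction
    let request = VNCoreMLRequest(model: model) { [weak self] request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let topResult = results.first else {
            return
        }

        // Update the UI on the main thread
        DispatchQueue.main.async {
            self?.answerLabel.text = "\(Int(topResult.confidence * 100))% it's \(topResult.identifier)"
        }
    }

    // Run the request off the main thread so the UI stays responsive
    let handler = VNImageRequestHandler(ciImage: ciImage)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```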
Here we display the placeholder text “Detecting image…” in the UILabel while Core ML does its work. Then we initialize the Core ML model; we are using the Inceptionv3 model for this demo app.
You can download more models from Apple’s Machine Learning page.
After that, we create a Vision request with a completion handler that gives us the prediction along with its confidence level.
We also need to call this method with our default image when the view loads, so replace your viewDidLoad() method with the code below.
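For example:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    // Classify the default image that is already shown in the image view
    guard let image = imageView.image else { return }
    detectScene(image: image)
}
```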
Now, create the action method that lets the user take a picture with the camera or choose one from the photo library.
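A sketch of that action (the method name pickImage is my choice; connect it to the button in the storyboard). Note that using the camera requires an NSCameraUsageDescription entry in the app’s Info.plist:

```swift
@IBAction func pickImage(_ sender: UIButton) {
    let alert = UIAlertController(title: "Choose Image",
                                  message: nil,
                                  preferredStyle: .actionSheet)

    // Offer the camera only on devices that actually have one
    if UIImagePickerController.isSourceTypeAvailable(.camera) {
        alert.addAction(UIAlertAction(title: "Camera", style: .default) { _ in
            self.imagePicker.sourceType = .camera
            self.present(self.imagePicker, animated: true)
        })
    }

    alert.addAction(UIAlertAction(title: "Photo Library", style: .default) { _ in
        self.imagePicker.sourceType = .photoLibrary
        self.present(self.imagePicker, animated: true)
    })

    alert.addAction(UIAlertAction(title: "Cancel", style: .cancel))
    present(alert, animated: true)
}
```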
That’s it. Run the project and you will see the output.

As you can see, the accuracy is not always up to the mark, but it still gives a clear idea of how to use a Core ML model. You can learn more from the links below:

