Image Classification using Create ML, Core ML and Vision Framework in Swift

Priya Talreja
Aug 28, 2019 · 5 min read

Create ML lets you build, train, and deploy machine learning models with no machine learning expertise required.

Core ML models are bundled into apps to help drive intelligent features like search or object recognition in photos.

Vision Framework performs face detection, text detection, barcode recognition, image registration, and general feature tracking.

We are going to build a demo app that recognizes whether an animal is a cat or a dog using Core ML.

Train the Model

We are going to train the image classifier in Xcode using a directory-based format: the images used for training must be grouped into folders.

Things required:

  1. Training Data
  • A top-level folder named “Training Data”.
  • Subfolders labeled by content (e.g., a folder named Dog).
  • Copy as many images as possible into each subfolder.
  • The image name doesn’t matter; what matters is the content of the image.
  • Use images of 300 × 300 pixels or higher resolution.
  • Use diverse images.
  • Balance the number of images across the subfolders.

2. Testing Data

  • A top-level folder named “Testing Data”.
  • Random images to evaluate the model.
  • Avoid overlap: there should be no overlap between the images in the “Testing Data” folder and the image sets in “Training Data”.
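Putting the two requirements together, the on-disk layout might look like this (folder and file names are illustrative; only the subfolder names, which become the class labels, matter):

```
Training Data/
├── Cat/
│   ├── cat001.jpg
│   └── ...
└── Dog/
    ├── dog001.jpg
    └── ...
Testing Data/
├── Cat/
│   └── ...
└── Dog/
    └── ...
```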

Now, once the data is ready, we need to train the model in a Swift playground. Create ML doesn’t run on mobile devices, so we need to create a macOS playground.

  1. Create a Playground.

Open Xcode, File → New → Playground → macOS → Blank.
Click “Next”.
Give a suitable name and click Create.

2. Enter the below code in the playground.

import CreateMLUI

let imageClassifierBuilder = MLImageClassifierBuilder()
imageClassifierBuilder.showInLiveView()

3. Run the code. You won’t see anything unless the Assistant Editor is open, so open it by pressing Option + Command + Enter.

4. Now you will be able to see the ImageClassifier live view.


5. Drag the “Training Data” folder into the drop area of the ImageClassifier live view to start training the model. Processing the training data takes time, depending on the size of the data and the performance of the computer.

6. After processing completes, the live view shows the model accuracy. You can test the model by dragging the “Testing Data” folder into the area named “Drag Images to Begin Testing”.


7. We can save the trained model by clicking the down arrow and then clicking Save. You can also change the model’s name.


Note: After saving the model, check its size. The trained model is very small (a few KB), even though we used almost 250 MB of images to train it.
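As an aside, the same training can be scripted with the CreateML framework instead of the drag-and-drop live view. This is a minimal sketch, assuming a macOS playground and hypothetical paths; the real locations depend on where you saved your “Training Data” folder:

```swift
import CreateML
import Foundation

// Hypothetical paths — adjust to your own folders.
let trainingDir = URL(fileURLWithPath: "/Users/you/Desktop/Training Data")

// Each subfolder name ("Cat", "Dog") becomes a class label.
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Save the trained model so it can be dragged into the iOS project.
try classifier.write(to: URL(fileURLWithPath: "/Users/you/Desktop/ImageClassifier.mlmodel"))
```

This produces the same kind of .mlmodel file as the live-view workflow, which can be useful when retraining needs to be repeatable.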

Now, we are going to use the generated model in an iOS app that recognizes a cat or dog in images taken with the camera or chosen from the gallery.

iOS Demo App

Let's start the Xcode project for developing the iOS app in Swift.

  1. Drag ImageClassifier.mlmodel into the Xcode project.
  2. In the storyboard, create a view with a UIImageView, a “Take Photo” button, and a label for showing the result.
  3. In the ViewController class, write the code for UIImagePickerController and its delegate.
  4. Now, we need to set up the ML model.
    The `ImageClassifier` class is automatically generated for the corresponding model.
    Replace `ImageClassifier` with your own model’s generated Swift class.
lazy var classificationRequest: VNCoreMLRequest = {
    do {
        let model = try VNCoreMLModel(for: ImageClassifier().model)
        let request = VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
            self?.processClassifications(for: request, error: error)
        })
        request.imageCropAndScaleOption = .centerCrop
        return request
    } catch {
        fatalError("Failed to load Vision ML model: \(error)")
    }
}()
  • The classificationRequest property is the image-analysis request that uses the Core ML model to process images. It instantiates a VNCoreMLModel.
  • The VNCoreMLRequest uses a VNCoreMLModel, based on a Core ML MLModel object, to run predictions with that model. Depending on the model, the returned observations are VNClassificationObservation for classifier models, VNPixelBufferObservation for image-to-image models, or VNCoreMLFeatureValueObservation for everything else.
  • The request’s results property contains the results of executing the request.
  • The completion handler calls the processClassifications method to evaluate the results and update the UI.

Before processing classifications, we need to perform the classification request.

5. Perform Classification Request.

  • The UIImage is converted to a CIImage, an image representation compatible with Core Image filters.
  • If the UIImage-to-CIImage conversion fails, a fatal error is thrown and no further classification can be done.
  • A VNImageRequestHandler is created with the image; it performs the requests scheduled for that image.
  • Image processing is heavy, so we should not block the main thread; the request is performed on a background queue.
func createClassificationsRequest(for image: UIImage) {
    predictionLabel.text = "Classifying..."
    let orientation = CGImagePropertyOrientation(image.imageOrientation)
    guard let ciImage = CIImage(image: image) else {
        fatalError("Unable to create \(CIImage.self) from \(image).")
    }
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
        do {
            try handler.perform([self.classificationRequest])
        } catch {
            print("Failed to perform classification.\n\(error.localizedDescription)")
        }
    }
}
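The code above calls CGImagePropertyOrientation(image.imageOrientation). That initializer is not part of the SDK; it comes from Apple’s Vision sample code as an extension, because UIImage.Orientation and CGImagePropertyOrientation do not share raw values, so a direct cast would be wrong. A sketch of that mapping:

```swift
import UIKit
import ImageIO

// Adapted from Apple's Vision sample code: map each UIImage.Orientation
// case explicitly to its CGImagePropertyOrientation counterpart.
extension CGImagePropertyOrientation {
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up: self = .up
        case .down: self = .down
        case .left: self = .left
        case .right: self = .right
        case .upMirrored: self = .upMirrored
        case .downMirrored: self = .downMirrored
        case .leftMirrored: self = .leftMirrored
        case .rightMirrored: self = .rightMirrored
        @unknown default: self = .up
        }
    }
}
```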

6. Process Classifications

  • This function updates the UI with the results of the classification.
  • UI updates are performed on the main thread.
  • VNClassificationObservation is the observation returned by a VNCoreMLRequest whose model is a classifier. A classifier produces an array of classifications: labels with confidence scores.
func processClassifications(for request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results else {
            self.predictionLabel.text = "Unable to classify image.\n\(error!.localizedDescription)"
            return
        }
        let classifications = results as! [VNClassificationObservation]
        if classifications.isEmpty {
            self.predictionLabel.text = "Nothing recognized."
        } else {
            let topClassifications = classifications.prefix(2)
            let descriptions = topClassifications.map { classification in
                String(format: "(%.2f) %@", classification.confidence, classification.identifier)
            }
            self.predictionLabel.text = descriptions.joined(separator: " | ")
        }
    }
}
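The label formatting at the end can be seen in isolation. This sketch uses a hypothetical (identifier, confidence) tuple as a stand-in for VNClassificationObservation, so it runs without the Vision framework:

```swift
import Foundation

// Stand-in for VNClassificationObservation's identifier/confidence pair.
let classifications: [(identifier: String, confidence: Float)] = [
    ("Dog", 0.97), ("Cat", 0.03), ("Other", 0.00)
]

// Keep the two most confident labels and format them like the view controller does.
let descriptions = classifications.prefix(2).map { c in
    String(format: "(%.2f) %@", Double(c.confidence), c.identifier)
}
let text = descriptions.joined(separator: " | ")
print(text) // (0.97) Dog | (0.03) Cat
```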

7. In the didFinishPickingMediaWithInfo delegate method, or wherever you get the image, call createClassificationsRequest with the UIImage as the input parameter.

createClassificationsRequest(for: image)
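For reference, the picker wiring from step 3 might look like the sketch below. The imageView and predictionLabel outlets and the button action name are assumptions; adjust them to your own storyboard:

```swift
import UIKit

extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    // Hypothetical action wired to the "Take Photo" button.
    @IBAction func takePhoto(_ sender: UIButton) {
        let picker = UIImagePickerController()
        picker.delegate = self
        // Fall back to the photo library on devices without a camera (e.g. Simulator).
        picker.sourceType = UIImagePickerController.isSourceTypeAvailable(.camera) ? .camera : .photoLibrary
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[.originalImage] as? UIImage else { return }
        imageView.image = image
        createClassificationsRequest(for: image)
    }
}
```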

Now the iOS app to recognize animals is complete!

Thank you. Hope this helps you.

Data Driven Investor

empowering you with data, knowledge, and expertise


Priya Talreja

Written by

iOS Developer
