Image Classification using Create ML, Core ML and Vision Framework in Swift

Priya Talreja
Aug 28 · 5 min read

Create ML lets you build, train, and deploy machine learning models with no machine learning expertise required.

Core ML models are bundled into apps to help drive intelligent features like search or object recognition in photos.

Vision Framework performs face detection, text detection, barcode recognition, image registration, and general feature tracking.

We are going to build a demo app that uses Core ML to recognize whether the animal in a photo is a cat or a dog.

Train the Model

We are going to train the image classifier in Xcode. Create ML expects a directory-based format: the images used for training need to be grouped into folders by label (a sample layout is sketched after the lists below).

Things required,

  1. Training Data
  • Top folder named “Training Data”.
  • Subfolders labeled by content (e.g. a folder named “Dog”).
  • Copy as many images as possible into each subfolder.
  • The image name doesn’t matter; what matters is the content of the image.
  • Use images of 300 x 300 pixels or higher resolution.
  • Use diverse images.
  • Keep the number of images balanced across the subfolders.

2. Testing Data

  • Top folder named “Testing Data”.
  • Random images to evaluate the model.
  • Avoid overlap: there should be no overlap between the images in the “Testing Data” folder and the image sets in “Training Data”.
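
For reference, the folder layout for this cat/dog example could look like the sketch below (the folder and file names are just placeholders; the test images are also grouped by label so the builder can report accuracy):

Training Data/
    Cat/
        cat001.jpg
        cat002.jpg
        ...
    Dog/
        dog001.jpg
        dog002.jpg
        ...
Testing Data/
    Cat/
        ...
    Dog/
        ...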

Now that the data is ready, we need to train the model in a Swift Playground.
Create ML doesn’t run on mobile devices, so we need to create a macOS playground.

  1. Create a Playground.

Open Xcode, File → New → Playground → macOS → Blank.
Click “Next”.
Give a suitable name and click Create.

2. Enter the below code in the playground.

import CreateMLUI

let imageClassifierBuilder = MLImageClassifierBuilder()
imageClassifierBuilder.showInLiveView()

3. Run the code. You won’t see anything if the Assistant Editor is not open, so open it by pressing Option + Command + Enter.

4. Now you will see the ImageClassifier live view in the Assistant Editor.

5. We need to drag the “Training Data” folder into the drop area of the ImageClassifier window to start training the model. Processing the training data takes time, depending on the size of the data and the performance of the computer.

6. After the processing is completed, the live view shows the model accuracy. You can test the model by dragging the “Testing Data” folder into the area named “Drag Images to Begin Testing”.

7. We can save the trained model by clicking the down arrow and then clicking Save. You can change the name of the model.

Note: After saving the model, you can check its size. The trained model is very small (a few KB), even though we used almost 250 MB of images to train it.
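
If you prefer to script the training instead of using the live view, CreateML also exposes MLImageClassifier directly in a macOS playground. A rough sketch, assuming the “Training Data” and “Testing Data” folders sit on your Desktop (the paths are placeholders; adjust them for your machine):

import CreateML
import Foundation

// Labeled folders (placeholder paths).
let trainingDir = URL(fileURLWithPath: "/Users/you/Desktop/Training Data")
let testingDir  = URL(fileURLWithPath: "/Users/you/Desktop/Testing Data")

// Train a classifier from the labeled subfolders (Cat, Dog, ...).
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Evaluate on the held-out test images and print the error rate.
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testingDir))
print("Classification error: \(evaluation.classificationError)")

// Write the .mlmodel file to disk so it can be dragged into the iOS project.
try classifier.write(to: URL(fileURLWithPath: "/Users/you/Desktop/ImageClassifier.mlmodel"))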

Now, we are going to use the generated model in an iOS app that can recognize a cat or dog in images taken with the camera or chosen from the photo library.

iOS Demo App

Let's start the Xcode project for developing the iOS app in Swift.

  1. Drag the ImageClassifier.mlmodel into the Xcode project.
  2. In the storyboard, create a view with a UIImageView, a “Take Photo” button, and a label for showing the result.
  3. In the ViewController class, write the code for UIImagePickerController and its delegate (the delegate callback is sketched in step 7 below).
  4. Now, we need to set up the ML model. Make sure the view controller imports CoreML and Vision.
    The `ImageClassifier` class is automatically generated for the corresponding model.
    Replace `ImageClassifier` with your model’s generated Swift class.
lazy var classificationRequest: VNCoreMLRequest = {
    do {
        let model = try VNCoreMLModel(for: ImageClassifier().model)
        let request = VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
            self?.processClassifications(for: request, error: error)
        })
        request.imageCropAndScaleOption = .centerCrop
        return request
    } catch {
        fatalError("Failed to load Vision ML model: \(error)")
    }
}()
  • The classificationRequest property is the image analysis request that uses the Core ML model to process images. It wraps the generated model in a VNCoreMLModel.
  • VNCoreMLRequest uses a VNCoreMLModel, which is based on a Core ML MLModel object, to run predictions with that model. Depending on the model, the returned observations are VNClassificationObservation for classifier models, VNPixelBufferObservation for image-to-image models, or VNCoreMLFeatureValueObservation for everything else.
  • The request’s results property contains the results of executing the request.
  • The completion handler calls the processClassifications method to evaluate the results and update the UI.

Before processing the classifications, we need to perform the classification request.

5. Perform Classification Request.

  • The UIImage is converted to a CIImage, an image representation compatible with Core Image filters.
  • If the UIImage-to-CIImage conversion fails, a fatal error is thrown and classification cannot proceed.
  • A VNImageRequestHandler is created with the image on which the scheduled requests will be performed.
  • Image processing is heavy, so we should not block the main thread; the request is performed on a background queue.
func createClassificationsRequest(for image: UIImage) {
    predictionLabel.text = "Classifying..."
    let orientation = CGImagePropertyOrientation(image.imageOrientation)
    guard let ciImage = CIImage(image: image) else {
        fatalError("Unable to create \(CIImage.self) from \(image).")
    }
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
        do {
            try handler.perform([self.classificationRequest])
        } catch {
            print("Failed to perform \n\(error.localizedDescription)")
        }
    }
}
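
Note: CGImagePropertyOrientation does not ship with an initializer that takes a UIImage.Orientation; Apple’s sample code adds one in a small extension. A sketch of that mapping, which you would add somewhere in the project:

import ImageIO
import UIKit

// Maps UIKit's image orientation to the EXIF-style orientation Vision expects.
extension CGImagePropertyOrientation {
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up: self = .up
        case .upMirrored: self = .upMirrored
        case .down: self = .down
        case .downMirrored: self = .downMirrored
        case .left: self = .left
        case .leftMirrored: self = .leftMirrored
        case .right: self = .right
        case .rightMirrored: self = .rightMirrored
        @unknown default: self = .up
        }
    }
}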

6. Process Classifications

  • This function updates the UI with the results of the classification.
  • UI Updates are performed on Main Thread.
  • VNClassificationObservation is the observation returned by a VNCoreMLRequest that uses a classifier model. A classifier produces an array of classifications, which are labels with confidence scores.
func processClassifications(for request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results else {
            self.predictionLabel.text = "Unable to classify image.\n\(error!.localizedDescription)"
            return
        }
        let classifications = results as! [VNClassificationObservation]
        if classifications.isEmpty {
            self.predictionLabel.text = "Nothing recognized."
        } else {
            let topClassifications = classifications.prefix(2)
            let descriptions = topClassifications.map { classification in
                return String(format: "(%.2f) %@", classification.confidence, classification.identifier)
            }
            self.predictionLabel.text = descriptions.joined(separator: " |")
        }
    }
}

7. In the didFinishPickingMediaWithInfo delegate method (or wherever you get the image), call the function createClassificationsRequest with the UIImage as its parameter, as sketched below.

createClassificationsRequest(for: image)
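
A minimal version of that delegate method might look like the sketch below (the imageView outlet name is just a placeholder; adjust it to match your storyboard):

func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    picker.dismiss(animated: true)
    // Grab the picked photo; bail out if it is missing.
    guard let image = info[.originalImage] as? UIImage else { return }
    imageView.image = image
    // Kick off the Vision + Core ML classification defined earlier.
    createClassificationsRequest(for: image)
}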

The iOS app to recognize animals is now complete!

Thank you. Hope this helps you.
