The most common way to create a Core ML model is with Create ML in an Xcode Playground. But what if you want to add more data from your app and retrain your model? A model bundled into your iOS app can’t be retrained there, so you have to do it on the backend. In case you didn’t already know, you can write server-side code in Swift, which is so cool! 😎 In this tutorial, you will see how to create a model and save it to disk using Perfect, a server-side Swift framework.
Step 1. Create the Xcode project
First, you need to download the template for the project. Open a terminal window, go to the folder where you want to create your project, and run the following command to clone Perfect’s template repository:
git clone https://github.com/PerfectlySoft/PerfectTemplate.git
Now you have a new folder named ‘PerfectTemplate’, containing a Package.swift file and a Sources directory. Open the Package.swift file and change the name property to the name of your project.
Now let’s generate the xcodeproj. In the same terminal window, navigate to the new ‘PerfectTemplate’ folder and run the following command:
swift package generate-xcodeproj
Now you have a project that can be opened in Xcode.
Step 2. Prepare the data
You will need two sets of data: training data and evaluation data. The training folder should contain about 80% of the photos and the evaluation folder the remaining 20%. Inside both folders, the images should be grouped into subfolders, one for each category you want your model to recognize. For example, if you want to identify fruits, both the training and evaluation folders need a subfolder named apple containing several different images of apples, another named banana following the same logic, and so on. You should have at least 10 images per category in the training folder.
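For the fruit example, the layout would look something like this (the folder and file names are just illustrative):

TrainingData/
    apple/
        apple1.jpg  apple2.jpg  apple3.jpg
    banana/
        banana1.jpg  banana2.jpg
EvaluationData/
    apple/
        apple_eval1.jpg
    banana/
        banana_eval1.jpg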
Step 3. Create and train the model
Open the main.swift file from your project and import CreateML and Foundation.
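The two import statements at the top of main.swift:

import CreateML
import Foundation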
Now we are going to create a new function that will contain all the logic for creating, training, and saving the model. It will have two parameters: request and response. First, we will create an MLImageClassifier, giving it the path to the training folder and some parameters:
- maxIterations: the maximum number of iterations used during training
- augmentationOptions: an optional parameter that lets us multiply our images to get a bigger data set. It can be set to any combination of the following values: blur, flip, exposure, noise, rotation, crop. Using these options increases the training time considerably, so for this tutorial we are not going to use it.
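For reference, the augmentation options form an option set, so if you did want to use them the configuration might look like the sketch below. The iteration count is an arbitrary example, and parameter labels can vary slightly between SDK versions:

```swift
import CreateML

// Any combination of .blur, .flip, .exposure, .noise, .rotation, .crop.
let augmentation: MLImageClassifier.ImageAugmentationOptions = [.flip, .rotation]

let parameters = MLImageClassifier.ModelParameters(
    maxIterations: 20,
    augmentationOptions: augmentation)
```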
After we have the classifier, we will evaluate its performance using the evaluation data we prepared earlier. If the result is not satisfactory, we can retrain the model with different parameters.
Finally, the model is saved to disk and a confirmation message is set as the body of the response.
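Putting the pieces together, the handler might look like the sketch below. The folder paths, model file name, and iteration count are assumptions you should adapt to your setup; the CreateML calls (`MLImageClassifier`, `evaluation(on:)`, `write(to:)`) and Perfect’s `HTTPRequest`/`HTTPResponse` types are real, though exact parameter labels can differ between SDK versions:

```swift
import CreateML
import Foundation
import PerfectHTTP

func createModelHandler(request: HTTPRequest, response: HTTPResponse) {
    // Hypothetical locations for the data prepared in Step 2.
    let trainingDir = URL(fileURLWithPath: "/path/to/TrainingData")
    let evaluationDir = URL(fileURLWithPath: "/path/to/EvaluationData")

    do {
        // Train the classifier; augmentation is skipped to keep training fast.
        let parameters = MLImageClassifier.ModelParameters(maxIterations: 20)
        let classifier = try MLImageClassifier(
            trainingData: .labeledDirectories(at: trainingDir),
            parameters: parameters)

        // Evaluate on the held-out 20%; classificationError is the
        // fraction of misclassified evaluation images.
        let evaluation = classifier.evaluation(on: .labeledDirectories(at: evaluationDir))
        let accuracy = (1 - evaluation.classificationError) * 100

        // Save the trained model to disk.
        try classifier.write(to: URL(fileURLWithPath: "/path/to/FruitClassifier.mlmodel"))

        response.setBody(string: "Model trained and saved. Accuracy: \(accuracy)%")
    } catch {
        response.setBody(string: "Training failed: \(error)")
    }
    response.completed()
}
```

If the accuracy is too low, this is the place to adjust the parameters and hit the route again.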
We also have to add the route that will call this method.
routes.add(method: .get, uri: "/trainModel", handler: createModelHandler)
Now you can run the project on ‘My Mac’ and test the new route we added. Open http://localhost:8181/trainModel in a browser and wait for the model to be trained. You can follow the progress in Xcode’s console.
You can create and retrain the model as many times as you want on the server side using CreateML. You can find the whole code on GitHub. I hope this article was helpful and easy to understand 😊 If you have any questions, don’t hesitate to leave me a comment below! Thanks!
Zipper Studios is a group of passionate engineers helping startups and well-established companies build their mobile products. Our clients are leaders in the fields of health and fitness, AI, and machine learning. We love to talk to like-minded people who want to innovate in the world of mobile, so drop us a line here.