Recognize Age and Gender by using Machine Learning in iOS Application

How to create a new CoreML Model with CreateML or use existing model and integrate it in a Xcode project.

Luciano Amoroso
Apple Developer Academy | Federico II
4 min read · Apr 8, 2020


CoreML is a framework created by Apple to integrate machine learning models into your app. You can use CoreML APIs and user data to make predictions, and to train or fine-tune models, all on the user’s device. A model is the result of applying a machine learning algorithm to a set of training data. You can use a model to make predictions based on new input data. Models can accomplish a wide variety of tasks that would be difficult or impractical to write in code. For example, you can train a model to categorize photos, or detect specific objects within a photo directly from its pixels.

For this purpose we are going to create a gender-recognition model from scratch, using Apple’s CreateML utility, and integrate an existing age-recognition model (AgeNet), which is based on convolutional neural networks trained for age classification.

Providing the data

One of the crucial aspects of creating a successful machine-learning model is providing a good amount of training data that contains pictures taken from different angles and in various lighting setups. We provided pictures of women and men that satisfy these requirements in order to train our model. Training data should make up around 80% of the data you provide, and it should be organised in a simple folder structure; the remaining 20% or so should be set aside as testing data.
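For an image classifier, CreateML expects one subfolder per class label, with the folder names serving as the labels. A hypothetical layout (the file names here are only illustrative) might look like this:

```
Training Data/
├── Male/
│   ├── man001.jpg
│   └── man002.jpg
└── Female/
    ├── woman001.jpg
    └── woman002.jpg
Testing Data/
├── Male/
│   └── man_test001.jpg
└── Female/
    └── woman_test001.jpg
```

The same class names must appear in both the training and testing folders so that CreateML can evaluate the model against the held-out data.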

Creating the model

With Xcode running, right-click its icon and select Open Developer Tool -> Create ML.

Select where to store the model, then choose Image Classifier and give it a name and a description.

Then select the folders for training data and testing data and select an amount of iterations.

Validating the result

Once the training session is completed, we can see the results and test the model further by giving it new pictures and checking how well it recognises them.

The UI shows a graph of the model’s accuracy progress with each training iteration, as well as the precision and recall details for each image class.

The precision is the number of true-positives divided by the sum of true-positives and false-positives. Recall is the number of true-positives divided by the sum of true-positives and false-negatives.
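As a quick worked example of those two formulas, here is a minimal Swift sketch using made-up counts (the numbers below are illustrative only, not results from our model):

```swift
// Hypothetical counts taken from an imaginary confusion matrix
let truePositives = 45.0
let falsePositives = 5.0
let falseNegatives = 10.0

// Precision: TP / (TP + FP) — of everything we labelled positive, how much was right
let precision = truePositives / (truePositives + falsePositives)

// Recall: TP / (TP + FN) — of everything that was actually positive, how much we found
let recall = truePositives / (truePositives + falseNegatives)

print(precision)  // 0.9
print(recall)     // ~0.818
```

A class can score high on one metric and low on the other, which is why the CreateML UI reports both per class.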

Integrate the model into your app

Adding the model to the project is very simple, whether you created it yourself or downloaded it. Just place it in the project’s root directory, or drag it directly into the root folder in Xcode.

Now click on the model file to see the input and output parameters and other model information.

Additionally, we can go to the Model section to check out the details and see how it works.

Now we can integrate it!

Step 1: Import Core ML and the Vision framework into your ViewController.swift file.

Step 2: Create a detectGender() function in the file. This function initialises your model, passes it an image as input, processes it, and outputs the prediction. The request’s completion handler is used to retrieve and print the prediction result. The same applies to the detectAge() function.
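A sketch of what detectGender() could look like, assuming the CreateML model was saved as "GenderNet" (Xcode generates a Swift class named after your model file, so substitute your own model’s name):

```swift
import UIKit
import CoreML
import Vision

// Sketch only: "GenderNet" is a placeholder for the class Xcode generates
// from your .mlmodel file.
func detectGender(image: CIImage) {
    guard let coreMLModel = try? GenderNet(configuration: MLModelConfiguration()).model,
          let model = try? VNCoreMLModel(for: coreMLModel) else {
        fatalError("Failed to load the gender-classification model")
    }

    // The completion handler retrieves the classification results.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let topResult = results.first else {
            print("Unexpected result type from VNCoreMLRequest")
            return
        }
        print("Gender: \(topResult.identifier) (confidence: \(topResult.confidence))")
    }

    // Run the request on the input image.
    let handler = VNImageRequestHandler(ciImage: image)
    do {
        try handler.perform([request])
    } catch {
        print("Failed to perform classification: \(error)")
    }
}
```

detectAge() follows the same pattern, only loading the AgeNet model instead.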

Step 3: Now call the newly created functions once your image picker has selected a photo. Assign the picked photo to your image view and add the method calls in the didFinishPickingMediaWithInfo delegate method added earlier.
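The delegate method could look like the sketch below, which assumes an `imageView` outlet and the detectGender()/detectAge() functions from Step 2:

```swift
import UIKit

// Sketch: assumes the view controller conforms to
// UIImagePickerControllerDelegate & UINavigationControllerDelegate
// and has an `imageView` outlet.
func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    if let userPickedImage = info[.originalImage] as? UIImage {
        // Show the picked photo in the UI.
        imageView.image = userPickedImage

        // Vision works with CIImage, so convert before classifying.
        guard let ciImage = CIImage(image: userPickedImage) else {
            fatalError("Could not convert UIImage to CIImage")
        }
        detectGender(image: ciImage)
        detectAge(image: ciImage)
    }
    picker.dismiss(animated: true, completion: nil)
}
```

Use `.originalImage` (or `.editedImage` if you allow editing) to pull the photo out of the `info` dictionary.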

The complete code is available here! (ML Models are included)

I hope you found this article helpful. If you have any questions, feel free to comment below and I’ll answer them as soon as I can. Thanks!

Made by: machine.ly
- Amoroso Luciano
- Gianfranco Caserta
- Giovanni Carfora
