Machine Learning + SwiftUI are Extremely Amazing

Ridho Saputra · 5 min read · Aug 11, 2022


Cover image source: Vecteezy

Hola!
A week ago, as usual, I took part in one of the Apple Developer Academy's series of events. This time it was the Nano Challenge, the 'bridge' between one challenge and another. As always, every challenge has its big idea, and the idea this time was quite interesting:

How can we improve our learning in the Academy?

I took a few minutes to figure out the most difficult thing I face at the Academy, and I'm sure the problem is Learning Objectives. For me, it takes quite a long time to update the learning objectives on my learning journey. For that reason, I decided to build an automation that can update them for me.

Talking about automation, I'm sure the first question that comes to mind is "How?". The answer is machine learning, which is why it is used in this solution. First, a machine learning model is trained to recognize learning objectives in some literature, so that when I finish reading a website or document, the model can determine which learning objectives it covers.

Into the Machine Learning

As I said in the last paragraph, I used machine learning so that the app can later show the learning objectives from a website or document I have read. When training a machine learning model, there is one major thing that needs to be prepared: the data. To get the data I needed, I chose a simple approach called web scraping.

There are a hundred ways to do web scraping in several programming languages, but I chose Python as my main language because it makes development easy and fast. And with BeautifulSoup as the scraping library, development becomes a piece of cake.

Here is a sneak peek of the code:
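The original snippet was shown as an image, so here is a minimal sketch of what a BeautifulSoup scraper along these lines could look like. The HTML structure and the `li.objective` selector are assumptions for illustration; a real script would fetch the page first (for example with `requests`).

```python
# Minimal web-scraping sketch (requires the beautifulsoup4 package).
from bs4 import BeautifulSoup

# In a real script this HTML would come from something like
# requests.get(url).text; a hardcoded snippet keeps the sketch self-contained.
html = """
<ul class="objectives">
  <li class="objective">Understand the basics of Core ML</li>
  <li class="objective">Build a simple SwiftUI view</li>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")

# Collect the text of every element matching the (hypothetical) selector.
objectives = [li.get_text(strip=True) for li in soup.select("li.objective")]

for text in objectives:
    print(text)
```

From here, the extracted sentences can be labeled and saved to CSV, the format Create ML accepts for text data.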

And the result will look like this:

Now that the data is available, let's start training the model. I use Apple's Create ML to handle everything related to ML. It's easy to use; we can choose a template as shown below:

For this project, I need the Text Classification template. After selecting it, I just need to drag in (or choose) the data that was prepared earlier. It will look something like this:

But there is a step I forgot about: splitting the data. I need to slice the data into two sets, the actual training data and the test data, usually apportioned with an 80–20 split. Once everything is ready, just click the 'Train' button at the top left and the result will come in several minutes. Here is the result:
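The 80–20 split mentioned above can be sketched in a few lines of Python. This is an illustration, not the original script; the `(text, label)` pair format is an assumption about how the scraped sentences might be stored.

```python
# Hedged sketch of an 80-20 train/test split over labeled text rows.
import random

# Placeholder data standing in for the scraped, labeled sentences.
rows = [(f"sentence {i}", "objective" if i % 2 else "other") for i in range(100)]

random.seed(42)        # seed so the shuffle is reproducible
random.shuffle(rows)   # shuffle before splitting to avoid ordering bias

split = int(len(rows) * 0.8)
train_data, test_data = rows[:split], rows[split:]

print(len(train_data), len(test_data))  # 80 20
```

Create ML can also hold out a test set for you automatically, but doing the split yourself keeps the evaluation data fixed across training runs.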

Into the SwiftUI

After finishing training the model, I can continue with designing and developing the application. As in the Software Development Life Cycle, before implementing the application it is necessary to design it first. I use Sketch for both the low-fidelity and high-fidelity designs. Here is the result of the high-fidelity (Hi-Fi) design:

Let's begin developing the application, since the Hi-Fi design has already been created. For this project, I used SwiftUI instead of UIKit because I want to get deeper into SwiftUI, and I know it will be easier to create an interface with it.

For this project, I learned to use the MVVM design pattern. It was kind of hard to figure out how MVVM works in Swift, but I got through it. Let me explain the parts one by one:

Model

The models represent the content in this project. There are two main models, Document and Objective. Document represents the information obtained when I choose a file from storage. Objective represents an item on my learning journey. In code, it becomes:
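The original code was shown as a screenshot; here is a hedged sketch of what the two models could look like. The property names (`name`, `url`, `code`, `title`, `isCompleted`) are assumptions for illustration, not the original definitions.

```swift
import Foundation

// Represents a document picked from storage.
struct Document: Identifiable {
    let id = UUID()
    let name: String   // file name shown in the list
    let url: URL       // location of the picked file
}

// Represents one learning objective on the learning journey.
struct Objective: Identifiable {
    let id = UUID()
    let code: String   // e.g. "LO-1" (hypothetical identifier)
    let title: String
    var isCompleted: Bool = false
}
```

Conforming to `Identifiable` is a natural choice here because SwiftUI's `List` and `ForEach` can then iterate over these models without an explicit `id:` argument.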

View

The views represent the layout and appearance of what a user sees on the screen. There are four main views: Home, Result, Detail, and Document Picker. Here is one example of the code:
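The original view code was an image, so here is a minimal sketch of what a Home view like this might look like in SwiftUI. The layout, labels, and the `isPickerPresented` state are assumptions, not the original implementation.

```swift
import SwiftUI

// Hypothetical Home view: a short prompt plus a button
// that would present the Document Picker.
struct HomeView: View {
    @State private var isPickerPresented = false

    var body: some View {
        NavigationView {
            VStack(spacing: 16) {
                Text("Pick a document to extract its learning objectives")
                    .multilineTextAlignment(.center)
                Button("Choose Document") {
                    isPickerPresented = true  // would trigger the picker sheet
                }
            }
            .padding()
            .navigationTitle("Home")
        }
    }
}
```

In a full app, a `.sheet(isPresented: $isPickerPresented)` modifier would present the Document Picker view from here.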

ViewModel

The view model holds the business logic and serves as the link between the model and the view. The main difference between a view model and the presenter in the Model-View-Presenter (MVP) pattern is that the view model doesn't hold a reference to a view; instead, the view binds directly to properties on the view model and receives updates automatically. Here is the code of a view model:
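The original view model code was also a screenshot; the sketch below shows one plausible shape for it, wrapping the trained Create ML text classifier. The class name, the generated model class `ObjectiveClassifier`, and the `"objective"` label are assumptions — the generated class name depends on the `.mlmodel` file, and the labels depend on the training data.

```swift
import SwiftUI
import NaturalLanguage

// Hypothetical view model: runs the trained text classifier over the
// sentences of a document and publishes the ones labeled as objectives.
final class ResultViewModel: ObservableObject {
    @Published var objectives: [String] = []

    func classify(sentences: [String]) {
        // Wrap the compiled Core ML text classifier in an NLModel.
        guard let mlModel = try? ObjectiveClassifier(configuration: .init()).model,
              let classifier = try? NLModel(mlModel: mlModel) else { return }

        // Keep only the sentences the model labels as learning objectives.
        objectives = sentences.filter {
            classifier.predictedLabel(for: $0) == "objective"
        }
    }
}
```

Because `objectives` is `@Published` and the class is an `ObservableObject`, any SwiftUI view observing this view model re-renders automatically when classification finishes — which is exactly the "no reference to the view" binding described above.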

Final Thoughts

From my experience building this project, I didn't expect that embedding machine learning in an iOS app would be this easy. And with SwiftUI you can create a view or layout faster than with UIKit, and even build custom views more easily. So I hope that with this publication, people who want to add machine learning to an iOS app will see how easy it is.
