My most interesting iOS coding challenge for a job vacancy (Chapter 1) — Challenges I faced when using PhotoKit and Core Data

Marchell
7 min read · Dec 27, 2022


This is the first time I’ve written on Medium, and I’m very excited to share my story. I’m not the best writer in the world, but I hope y’all can learn something from this story.

Photo by Christopher Gower on Unsplash

So in late 2022, I was planning to look for a new opportunity and resign from my job at the time. About a week before writing this story, I found a vacancy that I thought suited my experience (I would say I’m still relatively new to iOS programming). So naturally, I applied for it. And also naturally, the company’s HR gave me some sort of coding challenge. I was told to make an app that groups images using a QR code. My take on how the app should work went something like this:

Open the app -> Arrive at the group list page -> Go to the add new group page -> Input the required fields -> Hit the save button -> Back to the group list page with the new data appearing in the list

So my first thought was: which frameworks should I use to develop this app? Oh yeah, PhotoKit and Core Data! I had used Core Data quite a bit in my job before, so I would say I’m quite familiar with that framework, but it was my first time using PhotoKit. With the help of Apple’s documentation and good ol’ StackOverflow, I managed to get the hang of it.

So what problems did I face during the development of this simple app? I’ll divide them into a few points.

1. Pixelated Images

Using PhotoKit (specifically PhotosUI), you can load photos from the photo library used by the default Photos app. It’s actually pretty simple: you just create a PHPickerViewController, set the picker configuration, set the delegate, and present it. That’s it.

Presenting a PHPickerViewController
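
Here’s a minimal sketch of how that presentation code might look; the view controller name and the selectedAssets property are my own placeholders, not the original code:

```swift
import Photos
import PhotosUI
import UIKit

// Hypothetical view controller standing in for the real "add new group" screen.
final class GroupFormViewController: UIViewController {

    // Filled in by the picker delegate callback (see the next sketch).
    var selectedAssets: [PHAsset] = []

    // Configure and present the system photo picker.
    func presentPhotoPicker() {
        var configuration = PHPickerConfiguration(photoLibrary: .shared())
        configuration.filter = .images      // only show images
        configuration.selectionLimit = 0    // 0 means no selection limit
        let picker = PHPickerViewController(configuration: configuration)
        picker.delegate = self              // conformance is added in the next sketch
        present(picker, animated: true)
    }
}
```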

You also need to implement a handler for what to do after you finish picking the photos. You can use the picker(_:didFinishPicking:) method from the PHPickerViewControllerDelegate protocol. Mine goes something like this:

Getting all of the image fetch results and storing them in an array
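
Continuing the sketch above, the delegate callback might look something like this; because the configuration was created with the shared photo library, each result carries an asset identifier that can be turned back into a PHAsset:

```swift
import Photos
import PhotosUI

// Delegate conformance for the view controller from the previous sketch.
extension GroupFormViewController: PHPickerViewControllerDelegate {

    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        picker.dismiss(animated: true)

        // Map the picker results back to PHAssets via their local identifiers.
        let identifiers = results.compactMap(\.assetIdentifier)
        let fetchResult = PHAsset.fetchAssets(withLocalIdentifiers: identifiers, options: nil)

        var assets: [PHAsset] = []
        fetchResult.enumerateObjects { asset, _, _ in
            assets.append(asset)
        }
        selectedAssets = assets
    }
}
```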

After all of the required fields are filled, I save the model into Core Data. Seems fine, right? Well, not really. Unfortunately, when I load the group model and enlarge the image, it shows a pixelated image like this:

Not pleasant for your eyes either, right?

What went wrong here? It turns out the problem was the PHAsset to UIImage converter that I made. Originally it looked something like this:

My PHAsset to UIImage converter (before specifying the options)
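
Roughly, the converter looked like this (a sketch, not the exact original): with options left nil, PHImageManager uses opportunistic delivery and may hand the completion a fast, low-resolution version of the image.

```swift
import Photos
import UIKit

// The "before" converter: no PHImageRequestOptions, so the image manager is
// free to deliver a quick, degraded image first — hence the pixelation.
func loadImage(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
    PHImageManager.default().requestImage(
        for: asset,
        targetSize: CGSize(width: asset.pixelWidth, height: asset.pixelHeight),
        contentMode: .aspectFit,
        options: nil
    ) { image, _ in
        completion(image)
    }
}
```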

As you can see, I didn’t specify any options for the imageManager object when requesting an image. My guess is that without options, the request assumes I want the fastest loading time, at the cost of image quality. So after a bit of research, I specified the options using PHImageRequestOptions, and after that it looks something like this:

My PHAsset to UIImage converter (after specifying the options)
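
A sketch of the updated converter; the two options it sets are explained in the next paragraph:

```swift
import Photos
import UIKit

// The "after" converter: explicit PHImageRequestOptions asking for the
// exact size and the highest-quality image.
func loadImage(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.resizeMode = .exact                   // honor the requested size exactly
    options.deliveryMode = .highQualityFormat     // only deliver the full-quality image

    PHImageManager.default().requestImage(
        for: asset,
        targetSize: CGSize(width: asset.pixelWidth, height: asset.pixelHeight),
        contentMode: .aspectFit,
        options: options
    ) { image, _ in
        completion(image)
    }
}
```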

So what I did here was create a PHImageRequestOptions object to specify the quality of the image I want to get from a PHAsset object. I set the resizeMode to .exact to get the image at its original size, then set the deliveryMode to .highQualityFormat to get the highest-quality image from the PHAsset object. And voilà! The image looks better when I load it.

The image looks so much better, but does the problem end here? Well, not really. Let’s talk about the other challenge that I faced.

2. Terrible performance for saving and loading models

This is probably the most interesting part for me, but it was also my biggest mistake when making this app. Long story short, I save my image group model to Core Data. At first, I asked myself: is it even possible to save custom object types to Core Data? Yes, you can. What I was trying to do was save my image group model, which mostly contains String properties; the images, however, were of type NSData. After some not-so-long research, I found that what I needed to do was create a custom data transformer using NSSecureUnarchiveFromDataTransformer. My Image Group Core Data entity looked like this:

Image Group Entity
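
For reference, here is a hypothetical NSManagedObject subclass matching that entity; the attribute names are my guesses, with images as a Transformable attribute backed by the custom transformer shown next:

```swift
import CoreData
import Foundation

// Hypothetical subclass mirroring the entity from the model editor.
@objc(ImageGroupEntity)
final class ImageGroupEntity: NSManagedObject {
    @NSManaged var id: String
    @NSManaged var name: String
    @NSManaged var qrCode: String
    // Transformable attribute holding an array of NSData (one per image).
    @NSManaged var images: [NSData]?
}
```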

Based on that, I wanted to make the images property an Array of NSData, so I decided to make the value transformer like this:

Data Array Value Transformer
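
A sketch of that transformer, following the usual NSSecureUnarchiveFromDataTransformer subclass pattern; the class name and the register() helper are my additions:

```swift
import Foundation

@objc(DataArrayValueTransformer)
final class DataArrayValueTransformer: NSSecureUnarchiveFromDataTransformer {

    static let name = NSValueTransformerName(rawValue: String(describing: DataArrayValueTransformer.self))

    // Allow an NSArray of NSData to be securely archived and unarchived.
    override static var allowedTopLevelClasses: [AnyClass] {
        [NSArray.self, NSData.self]
    }

    // Register the transformer once, before the Core Data stack is set up.
    static func register() {
        ValueTransformer.setValueTransformer(DataArrayValueTransformer(), forName: name)
    }
}
```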

And then save the entity like this:

Saving a GroupModel to Core Data: I created a static function to convert the GroupModel to a Core Data entity
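
A sketch of what that conversion looked like, reusing the hypothetical names from the earlier sketches; note that every full-size image gets encoded to data and written straight into the entity, which is exactly where things go wrong:

```swift
import CoreData
import UIKit

// Hypothetical in-memory model built from the "add new group" form.
struct GroupModel {
    let id: String
    let name: String
    let qrCode: String
    let images: [UIImage]
}

extension ImageGroupEntity {

    // Convert a GroupModel into a Core Data entity and save it.
    static func save(_ model: GroupModel, in context: NSManagedObjectContext) throws {
        let entity = ImageGroupEntity(context: context)
        entity.id = model.id
        entity.name = model.name
        entity.qrCode = model.qrCode
        // Encode every full-size image as data — this is the expensive part.
        entity.images = model.images
            .compactMap { $0.jpegData(compressionQuality: 1.0) }
            .map { $0 as NSData }
        try context.save()
    }
}
```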

Seems okay, right? Nope. After I tested whether the value transformer worked as expected, my app froze quite a number of times when saving a group model and reloading the list of group models. I thought, “Wow, this is strange. What’s happening here?” When I checked the debug navigator while running the app, it turned out my app was using the CPU very intensively and quite a lot of memory too. Also, my phone was quite warm after saving a model.

The CPU percentage: you hate to see that.
Memory usage was not looking good either.

Not good. So what happened here? It turns out that saving huge NSData objects into Core Data is not good practice. I was curious how big the image data actually was, and yeah, it was pretty big: photos taken with my phone were usually around 13 MB or more.

These photos are huge.

So after yet another round of research, it turns out it’s not good to save a binary object (like NSData) to Core Data if it’s larger than about 1 MB, so what I did was not a good way to store custom objects. Here’s a StackOverflow post that explains it:

When it comes to binary data you should determine how to store it based on the expected size of data you are going to be working with. The rule is:

- Less than 100K; store as a binary property in your main table

- Less than 1M; store as a binary property in an ancillary table to avoid over fetching

- Greater than 1M; store on disk and store its file path in the Core Data table.

So what I did instead was save the image files to disk using FileManager and store only the image names in Core Data (replacing the images property with an Array of Strings in my Core Data entity). First, I created a custom image model containing an image id and the image itself as a UIImage. Then, whenever I save or load the group data, I write or read the image files with FileManager and build the image models from the loaded data. Don’t forget to replace the transformer in the Core Data model and change the custom class to an Array of Strings. I also downsample the UIImage before showing it in a UIImageView for even better performance (see the sketch after the captions below). If you want to know more about image downsampling, I recommend watching the WWDC session Image and Graphics Best Practices.

Creating an Image Model with id as the identifier to store in the device’s disk.
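
A sketch of that image model; the id doubles as the file name on disk (names are my assumptions):

```swift
import UIKit

// Image model kept in memory; only its id ends up in Core Data.
struct ImageModel {
    let id: String        // also used as the file name on disk
    let image: UIImage

    init(id: String = UUID().uuidString, image: UIImage) {
        self.id = id
        self.image = image
    }
}
```
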
Saving the Data into disk using FileManager
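
And a sketch of the FileManager side, assuming JPEG files in the Documents directory; the helper names are mine:

```swift
import UIKit

// Writes and reads image files on disk; only the file name is kept in Core Data.
enum ImageFileStore {

    static var directory: URL {
        FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    }

    // Write the image to disk and return the name to store in Core Data.
    static func save(_ model: ImageModel) throws -> String {
        let fileName = model.id + ".jpg"
        let url = directory.appendingPathComponent(fileName)
        guard let data = model.image.jpegData(compressionQuality: 0.8) else {
            throw CocoaError(.fileWriteUnknown)
        }
        try data.write(to: url)
        return fileName
    }

    // Load an image back from disk by the name kept in Core Data.
    static func load(named fileName: String) -> UIImage? {
        let url = directory.appendingPathComponent(fileName)
        guard let data = try? Data(contentsOf: url) else { return nil }
        return UIImage(data: data)
    }
}
```
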
Replacing the images property in Core Data entity into Array of Strings
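
For completeness, here’s a sketch of the downsampling step mentioned above, based on the ImageIO technique from the Image and Graphics Best Practices session: decode the file directly into a thumbnail no larger than the view that will display it.

```swift
import ImageIO
import UIKit

// Downsample an image file into a UIImage sized for display.
func downsampledImage(at url: URL, to pointSize: CGSize, scale: CGFloat) -> UIImage? {
    // Don't decode the full image up front.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else { return nil }

    let maxDimension = max(pointSize.width, pointSize.height) * scale
    let thumbnailOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,       // decode now, on this thread
        kCGImageSourceCreateThumbnailWithTransform: true, // respect orientation
        kCGImageSourceThumbnailMaxPixelSize: maxDimension
    ] as CFDictionary

    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions) else { return nil }
    return UIImage(cgImage: cgImage)
}
```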

Great! Now for the moment of truth: did all of these changes actually affect the app’s performance? They did! The app ran so much better: no more freezing, and CPU and memory usage were much healthier.

Memory use after the changes.
CPU percentage usage after the changes.

So what did I learn from making this app? I learned the basics of the PhotoKit framework, the importance of setting PHImageRequestOptions when requesting an image with PHImageManager, and lastly, how to store data efficiently in a Core Data model. In the future, I’m planning to improve the saving procedure by storing the PHAsset’s asset identifier instead of an image path, so I don’t have to copy the images onto the device’s disk at all. I think that will be a better approach for saving the images into the group model. Thank you for reading; I hope y’all could learn something from this. Have a great day!
