How I Optimize Memory Consumption for Content-Rich Apps

Fahri Novaldi
11 min read · Dec 3, 2022

--

Nine months ago, I started my journey as a learner at Apple Developer Academy @Binus, and I knew little about and had zero experience in iOS development. But I've barely survived the past months thanks to a very supportive (and, at the same time, competitive 😵) learning environment.

Long story short, as a learner I've picked up a lot of knowledge over the past months, and the most ingrained lesson for me is the "why" mentality and a problem-solving framework.

We have countless approaches to solving any given problem; as CS students, we generalize them into two types: depth-first and breadth-first. But the optimal one is a combination of both.

Just like Thanos said, all things should be balanced.

I've found plenty of superb research papers about it, but this article won't cover them in depth; if you're curious, you can read one of them here.

Recently, my team and I developed (and are still developing 😵‍💫) an app called Hayo! for the Macro Challenge.

Hayo!, an app that helps users manage their events

As you can see, our app's interface is very eye-catching (kudos to our design team, who were willing to struggle until the last minute ✊😧).

From a technical perspective, the amount of graphical content scales linearly with memory consumption. It also reminds me of my first, sweetest mistake at the academy while developing Kelana (my first iOS app). Kelana is considered a content-rich app because of its numerous high-resolution images.

“We don’t make mistakes, just happy little accidents.” — Bob Ross

Kelana's performance was poor; the homepage displayed six 4K images and consumed ≈ 350 MB of RAM. It consumed so much RAM because the UIImageView class uses the raw downloaded image (4K) for the UI even though the user only sees it as a small thumbnail, and I didn't notice until a few months ago, when I watched closely what was going on in the heap allocations using Instruments.

Both of them (Kelana & Hayo!) share similar characteristics: horizontal collection views, heavy remote image assets, and very rich visual content (content-rich).

So before the development of Hayo! began, I took a few steps back to learn something from my past journey as a learner, and now I'm very excited to tell you about the thinking process and break it down.

*P.S. There are so many things I still need to learn in iOS development; what I'm going to tell you is my perspective as a junior learner. If you have a better approach or best practice, let me know in the comments below so others can learn about it too.

In this case, I'm going to start a mock project that resembles the defining characteristic of Kelana and Hayo!: ultra-high-definition images loaded from a remote URL.

#1 Know what the problem is and why it's happening

Being aware of what's happening in your app is the first thing you need to do. If you don't know why the problem exists, then you're going to solve it blindly and maybe end up brute-forcing StackOverflow answers one by one (*this is not a good approach & I'm too lazy to do that).

In this case, the app should load an image from a remote URL. Since UIImageView doesn't have the ability to load from a URL, we can add an extension to UIImageView.

the code was originally taken from the Kelana repo & enhanced for threading
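
The original gist isn't embedded here, so below is a minimal sketch of what such an extension can look like; the method name loadImage(from:) and the plain Data(contentsOf:) fetch are my assumptions, not the exact code from the repo.

```swift
import UIKit

extension UIImageView {
    /// Fetches the image data on a background queue,
    /// then hops back to the main queue to update the UI.
    func loadImage(from url: URL) {
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            guard let data = try? Data(contentsOf: url),
                  let image = UIImage(data: data) else { return }
            DispatchQueue.main.async {
                self?.image = image
            }
        }
    }
}
```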

Why does it have nested background & main threads?

I separate image fetching onto a background thread and UI updates onto the main thread (for performance reasons).

Voilà, we solved our first problem and implemented an app that loads images from remote URLs. But wait, does the app feel laggy, or is it just me? Is Chrom* just eating my RAM? is k&#*&@vs a&$n? Why would it a^*@$@ek? 😵 😵

#2 Don't feel overwhelmed, rushed, or useless when solving one problem creates more

Do you feel overwhelmed yet? Slow down, Romeo, we're going to solve it bit by bit.

an app that shows 1 image & 1 label consumes 100MB of RAM 💀; the first cost of our solution

As you can see, the user can't tell whether the app is loading or just frozen. So the next thing we can do is implement a loading indicator for a better UX.

a stolen code from StackOverflow 🤧
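
The exact snippet isn't reproduced here, but a typical spinner helper built on UIActivityIndicatorView looks something like this; the helper name showLoadingIndicator() is just my placeholder.

```swift
import UIKit

extension UIView {
    /// Adds a centered spinner and starts it; keep the returned
    /// indicator so you can stop and remove it once loading finishes.
    func showLoadingIndicator() -> UIActivityIndicatorView {
        let spinner = UIActivityIndicatorView(style: .large)
        spinner.translatesAutoresizingMaskIntoConstraints = false
        addSubview(spinner)
        NSLayoutConstraint.activate([
            spinner.centerXAnchor.constraint(equalTo: centerXAnchor),
            spinner.centerYAnchor.constraint(equalTo: centerYAnchor)
        ])
        spinner.startAnimating()
        return spinner
    }
}
```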

Woohoo 🎉 we've implemented an indicator that makes the user experience better and the loading feel shorter. The memory consumption is still bad, but we're one step closer.

From my experience with the Kelana app …

UIImageView class uses the raw downloaded image (4K) for the UI even though the user only sees it as a small thumbnail

So unmoderated content size is our main concern; let's break it down using root-cause analysis.

  • The bigger the content size, the bigger the memory consumption, so space complexity is linear.
  • If memory usage is spiking, the memory footprint is getting dirty.
  • If memory usage rises oddly, app responsiveness gets worse.
  • The cleaner the memory footprint, the smaller the chance the app is killed by iOS under memory pressure (iOS has no garbage collector; the system simply terminates memory-hungry apps).

The app eats 800 MB of RAM to display 8 photos 🫠🫠

fact: Memory usage is related to the dimensions of the image, not the file size.
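
As a quick back-of-the-envelope check (assuming a decoded RGBA bitmap at 4 bytes per pixel, which is the common case):

```swift
// Rough decoded size of a single 4K image — illustrative arithmetic only.
let width = 3_840, height = 2_160, bytesPerPixel = 4
let decodedBytes = width * height * bytesPerPixel        // 33,177,600 bytes
print(Double(decodedBytes) / 1_048_576)                  // ≈ 31.6 MB, no matter how small the file is
```

So a 2 MB JPEG on disk can easily balloon to roughly 32 MB of RAM once it's decoded for display.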

We have two options to tackle these problems. First, we could request an image already resized to the size we need. Second, we could resize the image on-device and keep the resized version.

The first approach is really a backend concern and beyond our expertise as iOS engineers, so the second approach is more feasible.

Loading indicator for a better UX; a child problem caused by unmoderated remote content

#3 Don't rush; write down all the possible solutions

Did we solve one issue just to create many other issues? are we on the right track?

When we face these situations, our natural human urge to reach for a no-brainer solution grows with the number of issues.

Maybe one of your friends (or you yourself) has been mumbling about caching, since we're facing memory and performance issues, right? So let's break it down.

As we know, according to the Apple docs, NSCache is a container that stores key-value pairs of generic objects. The docs also say it can help minimize an app's memory footprint.
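
For reference, a bare-bones NSCache setup looks roughly like this (not what we end up shipping, as discussed below; the cost calculation is only an approximation):

```swift
import UIKit

// NSCache keys and values must be reference types, hence NSString and UIImage.
let imageCache = NSCache<NSString, UIImage>()
imageCache.totalCostLimit = 50 * 1_024 * 1_024   // ask the cache to stay around ~50 MB

func cache(_ image: UIImage, forKey key: String) {
    // Approximate cost: decoded bitmap size in bytes.
    let pixels = image.size.width * image.scale * image.size.height * image.scale
    imageCache.setObject(image, forKey: key as NSString, cost: Int(pixels) * 4)
}

func cachedImage(forKey key: String) -> UIImage? {
    imageCache.object(forKey: key as NSString)
}
```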

Let's think again: a cache can minimize the memory footprint, but what would happen if we stored a 4K image in the cache? First, it would help responsiveness, because it avoids repeating the computation.

Second, the cache consumes as much space as the content it stores, and if the cache limit is smaller than the object, it's completely useless. Ask yourself: is the time saved worth the amount of memory used?

if you don’t understand how the cache works, shame on you. JK

Key point: a cache should be used if and only if the data is frequently reused and expensive to compute.

So in this case, image caching is premature and not the best solution. Let's take a step back and look from a different angle.

On a daily basis, we often face cases where we need to reduce a file's size. There are many ways to reduce file size; in this context, it's called rasterization. When we compress an image, what's actually happening behind the bits? TL;DR, look at the illustration below.

https://www.techspot.com/article/1888-how-to-3d-rendering-rasterization-ray-tracing/

It reminds me of when I was learning web development in my 4th semester using Next.js. Next.js uses rasterization to optimize image elements. In oversimplified terms, the image resolution is compressed to exactly the size of the <img> frame on screen, so it reduces memory usage.

The theoretical hypothesis for what we were going to do

In Phot*shop and Next.js, rasterizing is just one click away, but how do we do this in Swift? Unfortunately, there's no built-in one-liner in UIKit (pls add this in the next update 😖). So we should dive into how the image rendering flow works in UIKit.

According to WWDC18 Session 219, Image and Graphics Best Practices, by Kyle Sluder, here's the relationship between UIImage and UIImageView.

Image Rendering pipeline. WWDC18

Yes the video is 4 years old, but it’s never too late to learn, right? 😉

The session also says we can proactively save memory in the decoding phase by downsampling (which is generally the same concept as rasterizing).

They also provide sample code for downsampling images using ImageIO & Core Graphics.

Sample downsampling implementation code from WWDC 18
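
The slide itself isn't embedded here; the snippet below is my reconstruction of that sample from the session, so treat it as a sketch and double-check against the original.

```swift
import UIKit
import ImageIO

/// Downsamples the image at a (local) URL so the decoded bitmap
/// is no larger than the given point size at the given scale.
func downsample(imageAt imageURL: URL, to pointSize: CGSize, scale: CGFloat) -> UIImage? {
    // Don't decode the full-size image just to create the source.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let imageSource = CGImageSourceCreateWithURL(imageURL as CFURL, sourceOptions) else {
        return nil
    }

    let maxDimensionInPixels = max(pointSize.width, pointSize.height) * scale
    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,        // decode at thumbnail creation time
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels
    ] as CFDictionary

    guard let downsampledImage =
            CGImageSourceCreateThumbnailAtIndex(imageSource, 0, downsampleOptions) else {
        return nil
    }
    return UIImage(cgImage: downsampledImage)
}
```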

As usual, let's copy-paste that sample code into our project.

Uh-oh, the app crashed, and TL;DR the error log said synchronous URL loading shouldn't happen on the main thread and suggested switching to an asynchronous API such as URLSession.

🤯 🤯

#4 Don’t be afraid to read the official Docs

Apple's Developer Documentation is the first source of truth you should go to. Don't be afraid to spend a minute skimming it. Personally, it overwhelms me too, but you'll get used to it over time.

a bible for iOS software engineers

According to the docs, those functions need a CGImageSource, and creating one from a URL requires a CFURL; in practice, CGImageSourceCreateWithURL is meant for local file URLs, since pointing it at a remote URL triggers exactly the synchronous loading we were just warned about. *CMIIW

Trivia: CG stands for Core Graphics & CF stands for Core Foundation

That means the image needs to be available locally, but ours is stored remotely in the cloud. So that sample code from WWDC18 couldn't do what we wished for out of the box 😥.

#5 Replicate the process, not the raw code

As engineers, we copy-paste code from StackOverflow for a living. But what happens if we don't understand what, how, and why the code works? Good code is code that works consistently and is easy to modify. To write it, you must understand how each component of your app works and how it interacts with the others.

When you write good code, you must understand why each keyword and character exists. This improves your technical sophistication and makes you a better software engineer. Less copying and pasting means more critical thinking, right?

So far, we know the WWDC18 sample code doesn't work for remote images. However, we now have a solid understanding of the why, what, and how of our next step: downsample the image and draw it on screen based on the image view's frame size.

At first glance, the first thing that comes to mind is to iterate over each region of the image (*a stride, in computer-vision terms), calculate the average pixel value of each region, then map it onto a smaller image. If you're having a hard time following my gibberish, look at the illustration below.

An Illustration Image downsampling pixel by pixel; src: https://www.researchgate.net/figure/Pre-processing-downsampling-applied-before-using-pixel-values-of-images-We-select-the_fig3_304617539

Long story short, after experimenting several times, I found the fastest way (with the fewest lines of code) to downsample an image. Fortunately, we don't have to implement the algorithm above ourselves; it's handled for us by UIGraphicsImageRenderer. I wrote the code below as an extension on UIImage, which is my personal preference for separation of concerns.
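
The embedded snippet isn't reproduced here, but the core of the approach looks roughly like this; the method name downsampled(to:) is just my placeholder.

```swift
import UIKit

extension UIImage {
    /// Redraws the image into a bitmap of `targetSize`, so the decoded
    /// bitmap matches what's shown on screen instead of the 4K original.
    func downsampled(to targetSize: CGSize) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            self.draw(in: CGRect(origin: .zero, size: targetSize))
        }
    }
}
```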

pls don’t just copy-paste the code 🫣
WHOAAA LOOK AT THE MEMORY SIZE. WE DID IT 🥳 🎉

So let's recap the progress we've made

A recap of the progress we’ve made

But wait… do you notice something wrong? At first, I didn't notice it either, but look closely at the original image versus the downsampled version.

IT'S BEEN STRETCHED 👉😂👈

In the downsampled version, the image's aspect ratio has been stretched to fit its container's frame size (the UIImageView), while the version on the right keeps the original aspect ratio and is cropped by .scaleAspectFill. Simply put, we need to retain the aspect ratio of the image.

#6 Pen and paper are your best friends

Now that we've identified the problem, we're halfway there, so let's dive into how aspect fill works.

An illustration of the difference between aspect fill and aspect fit

After seeing the image above, it isn't as complicated as it seemed, is it? With a bit of high-school math you'll be able to achieve what you wished for.

After spending a few hours sketching a geometric approach, I got stuck trying to simplify the calculation, but someone had already solved it on StackOverflow. (My bad for not checking StackOverflow first.)

Here's how it looks when implemented as code.
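
The gist itself isn't embedded here, but the idea (adapted from that StackOverflow answer) is roughly the following; the method name downsampledAspectFill(to:) is just illustrative.

```swift
import UIKit

extension UIImage {
    /// Downsamples while preserving the aspect ratio, mimicking `.scaleAspectFill`:
    /// scale the image so it covers `targetSize`, then crop the overflow.
    func downsampledAspectFill(to targetSize: CGSize) -> UIImage {
        // Scale factor that makes the image cover the whole target frame.
        let scale = max(targetSize.width / size.width,
                        targetSize.height / size.height)
        let scaledSize = CGSize(width: size.width * scale,
                                height: size.height * scale)
        // Center the scaled image so the crop is symmetric on both sides.
        let origin = CGPoint(x: (targetSize.width - scaledSize.width) / 2,
                             y: (targetSize.height - scaledSize.height) / 2)

        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            self.draw(in: CGRect(origin: origin, size: scaledSize))
        }
    }
}
```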

Thanks, StackOverflow 🧡
The image keeps its aspect ratio; memory usage increases a bit but stays at an acceptable level

Wrap up

We've worked on so many things; let's recap our journey of optimizing the app.

such a long way to improve the little things that have a HUGE impact

By breaking down the problem, the potential solutions, and their impact, we manage to declutter our thoughts and make them less overwhelming, don't we?

And by downsampling the images, we managed to reduce the memory footprint from 750 MB to 37 MB for 8 images, and from 100 MB to 35 MB for a single image! Isn't that impressive?

Also, by observation, memory now scales with the on-screen size of each image instead of the source resolution, so it's still linear in the number of images but with a far smaller constant factor. *CMIIW

Our app now eats less memory, is more responsive, has less chance of being killed in the background, and consumes less power. Other apps running on the same device will have more memory to work with, and our users will be happier as a result.

Thanks for reading all the way to this line. I hope you enjoyed this little article and learned something from it. I would love to hear your thoughts on iOS development or software engineering in general, and feel free to correct me if you find something misleading or wrong, since I'm new to iOS development.

Until next time, Fahri.

© 2022 Fahri Novaldi, Hayo! Development Team. All Rights Reserved. Images are available under the Creative Commons Attribution 4.0 International License.
