What affects the performance of an iOS app? How do you build beautiful UI and stay responsive? Why can table view or collection view scrolling be slow?
Performance is a complex problem. Rendering is one of its components, often overlooked and yet easy to improve.
In this article I want to give an overview of strategies for improving the rendering performance of iOS UIKit apps.
By UIKit apps I mean native apps built with UITableView, UICollectionView, and other standard components: e-commerce, educational, productivity, and other categories of apps. If you don't use UIKit directly, use a hybrid technology, or are building a specialized graphics app, the techniques described in this article may have limited application for you. But please continue reading, and I hope you enjoy it 😊
Before we jump into the improvements themselves, I want to describe how the rendering process works and what causes frame drops.
iPhone and iPad displays update at a 60 Hz refresh rate. Displays of the latest iPad Pros are capable of 120 Hz. Apple TV can match the refresh rate of the TV or the movie it plays.
A display with a 60 Hz refresh rate refreshes 60 times per second. This number is constant. Each time the cycle repeats:
- Convert a bitmap from the framebuffer into a video signal;
- Light physical pixels of the display.
The framebuffer is a special reserved region of memory that stores a bitmap representation of the content on the display. It serves as the input for the display and the output of the rendering process.
The app must be able to render frames at the frequency of the display. A frequency of 60 Hz means 60 FPS for the app, about 16.67 ms to render a frame.
Note that this doesn't necessarily mean the app renders 60 times per second. When the content doesn't change, there is no need to re-render it.
The Render Server
The Render Server is a separate process that issues drawing calls to the GPU using OpenGL or Metal.
Core Animation batches changes into a transaction, encodes it, and commits it to the Render Server.
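For illustration, you can create such a transaction explicitly with CATransaction; all layer changes made between begin and commit are batched and sent to the Render Server together. This is a sketch — in ordinary UIKit code you rarely need it, because an implicit transaction is created for you on every run loop turn:

```swift
import QuartzCore

let layer = CALayer()

CATransaction.begin()
CATransaction.setDisableActions(true) // skip implicit animations for this batch
layer.opacity = 0.5
layer.cornerRadius = 8
CATransaction.commit() // the batched changes are encoded and committed to the Render Server
```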
We talked about the framebuffer that stores the bitmap shown on the screen, and about rendering as the process of creating that bitmap.
When rendering offscreen, we introduce an offscreen buffer and render into it.
Offscreen rendering is necessary for some effects, but in general we want to minimize it. Why? First, we render more pixels. Second, we then have to take the offscreen buffer and render it into the framebuffer. Third, offscreen rendering causes idle time on the GPU, because it must switch contexts between the framebuffer and the offscreen buffer.
To run at 60 FPS the app has ~16.67 ms to render each frame. Let's break down scrolling over content represented by 4 frames.
We have a timeline split into 16.67 ms intervals. The rendering step for a frame comes before its display step.
|          | 16.67 ms | 16.67 ms | 16.67 ms | 16.67 ms | 16.67 ms |
|----------|----------|----------|----------|----------|----------|
| Expected |          | Frame 1  | Frame 2  | Frame 3  | Frame 4  |
| Display  |          | Frame 1  | Frame 1  | Frame 2  | Frame 4  |
| Render   | Frame 1  | Frame 2  | Frame 2  | Frame 4  |          |
- Frame 1 is rendered in less than 16.67 ms.
- Frame 2 takes two intervals to render. Frame 1 is still in the framebuffer, so it is the one displayed.
- Frame 2 is finally rendered and displayed. By this moment the user has scrolled past Frame 3, so it is dropped. The next frame to render is Frame 4.
As you can see, when one frame takes longer to render, the previous frame is displayed in its place and subsequent frames are dropped.
From what we have discussed so far: we want our frames to render at or faster than the display frequency, and we want to minimize offscreen rendering.
We must understand that besides rendering frames, the app has additional work to perform: handling user interactions, processing system events, accessing databases and files (including localized strings and assets), coordination and business logic, and so on. The less of this work there is, and the more efficiently you deal with it, the better the rendering performance and the fewer the frame drops.
We can segment rendering improvement strategies into these categories:
- Preparing content;
- Utilizing the framework;
- Using an alternative approach;
- Reducing graphical complexity.
Using these strategies will reduce the amount of work the framework must perform when rendering.
Preparing content
Preparing content is an important step toward better rendering performance. Removing steps such as scaling and color format conversion gives a noticeable performance boost.
When creating assets to bundle with the app, make sure images are the correct size and color space. Remove the alpha channel whenever possible to avoid layer blending (two or more layers combined to yield a composite graphic). Discuss this with your designer if needed.
This is a bit more tricky when dealing with images you cannot prepare in advance, most often the ones downloaded from the network. Take advantage of multithreading by performing scaling as part of your fetch operation. Frameworks such as Alamofire include UIImage extensions to scale the image, and you can easily write one yourself.
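As a sketch, here is a hypothetical UIImage extension that downscales an image with UIGraphicsImageRenderer. It is safe to call from a background queue, so you can run it as part of the fetch operation before handing the result to the main thread:

```swift
import UIKit

extension UIImage {
    /// Returns a copy of the image scaled down to fit `targetSize`,
    /// preserving the aspect ratio. Safe to call off the main thread.
    func scaled(toFit targetSize: CGSize) -> UIImage {
        let ratio = min(targetSize.width / size.width,
                        targetSize.height / size.height)
        // Never upscale; the original is already small enough.
        guard ratio < 1 else { return self }

        let newSize = CGSize(width: size.width * ratio,
                             height: size.height * ratio)
        return UIGraphicsImageRenderer(size: newSize).image { _ in
            draw(in: CGRect(origin: .zero, size: newSize))
        }
    }
}

// Usage: scale on a background queue, display on the main queue.
// URLSession.shared.dataTask(with: url) { data, _, _ in
//     guard let data = data, let image = UIImage(data: data) else { return }
//     let thumbnail = image.scaled(toFit: CGSize(width: 120, height: 120))
//     DispatchQueue.main.async { imageView.image = thumbnail }
// }
```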
Utilize the framework
UIKit and Core Animation are built to solve a variety of tasks, and that requires a developer to tailor a concrete solution.
Take UILabel as an example. By default, isOpaque is set to false. This way labels can display text over any background, which is what we want in most cases. But at the same time it creates unwanted layer blending: extra work we can avoid by setting backgroundColor to the superview's color.
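A minimal sketch of the idea, assuming the label sits on a white superview:

```swift
import UIKit

let label = UILabel()
label.text = "Title"
// An opaque label with a background matching its superview
// avoids blending with the layers underneath it.
label.isOpaque = true
label.backgroundColor = .white // same color as the superview
```

You can verify the effect with the "Color Blended Layers" option in the Simulator or Instruments: blended labels are highlighted in red, opaque ones in green.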
Another example is shadows on CALayer. When possible, provide a shadowPath to avoid offscreen rendering.
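For example (a sketch; remember to update shadowPath whenever the view's bounds change):

```swift
import UIKit

let card = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 100))
card.layer.shadowColor = UIColor.black.cgColor
card.layer.shadowOpacity = 0.3
card.layer.shadowOffset = CGSize(width: 0, height: 2)
// An explicit path lets Core Animation skip deriving the shadow shape
// from the layer's contents, which would require offscreen rendering.
card.layer.shadowPath = UIBezierPath(roundedRect: card.bounds,
                                     cornerRadius: 8).cgPath
```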
Using an alternative approach
Sometimes we can use an alternative approach to achieve the same look.
Take custom drawing as an example. When you override the draw(_:) method, UIKit creates and configures a drawing environment: additional work that causes memory allocation. Providing existing content is more performant, since rendering an image is faster. For instance, overriding draw(_:) to display a one-pixel line or a custom background can be replaced by a one-pixel image with properly configured slicing.
In general, provide content (i.e. a bitmap) instead of implementing custom drawing.
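As a sketch of the slicing approach, assuming a tiny 3×3 pt image named "card-background" in the asset catalog, cap insets let UIKit stretch the center region instead of invoking draw(_:):

```swift
import UIKit

// The cap insets mark the stretchable center region; the 1 pt edges
// keep their original pixels while the middle is stretched to fill.
let background = UIImage(named: "card-background")?
    .resizableImage(withCapInsets: UIEdgeInsets(top: 1, left: 1,
                                                bottom: 1, right: 1),
                    resizingMode: .stretch)

let imageView = UIImageView(image: background)
imageView.frame = CGRect(x: 0, y: 0, width: 320, height: 44)
```

The same slicing can also be configured visually in the asset catalog, with no code at all.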
Reducing graphical complexity
I left this for the end because reducing graphical complexity is a compromise between visual appeal and performance. In some cases UIKit itself reduces visual effects when the hardware is not performant enough: for example, the UIVisualEffect blur and vibrancy effects work only on sufficiently fast devices. You can apply the same strategy, using an empirical approach to find the sweet spot between performance and visuals across devices.
I hope you found this article useful. In future articles I plan to cover specific cases and profiling, so stay tuned.
To better understand rendering and graphics performance, I recommend watching these WWDC sessions: