Don’t let these 3 code patterns crater your app’s performance

Brian Cooley
Published in Product Science AI
10 min read · Oct 24, 2023

App performance is a universal challenge for app developers. In today’s world, where there’s an app for everything, companies constantly strive to make their apps stand out. To illustrate how much mobile app performance matters, imagine this scenario: you’re running late for a flight, scrambling to gather your belongings, and urgently need to book a ride to the airport. You open a popular ride-sharing app, hoping to make your flight on time, but it takes an agonizingly long time to launch, and your chances of catching the flight shrink by the second. As a last resort, you close it and try a competitor’s service, which loads instantly and lets you book a ride within seconds. You catch your flight and become a loyal customer of the app that delivered the seamless experience.

This example highlights how good performance can attract and retain users while poor performance can drive them away. It’s essential to ensure that your app provides a frictionless user experience to prevent app abandonment, a common trend in the mobile app market.

Many of us can relate to the frustration of struggling with a slow app when we need something done quickly. For those who prefer data-driven insights, here are some statistics to consider:

  1. Approximately 50% of apps are uninstalled within 30 days of installation (AppsFlyer).
  2. Most apps lose 80% of their mobile users, except for the most popular ones (Andrew Chen).
  3. 70% of mobile app users will abandon an app if it takes too long to load (Think Storage Now).
  4. 84% of app users will abandon an app if it fails twice (Compuware).

In a competitive and crowded mobile app market, continuous availability and a smooth user experience are crucial to minimize user churn. Monitoring and maintaining mobile apps come with unique challenges, such as supporting various devices and optimizing performance for a fast-paced mobile environment. Users expect apps to be highly responsive, making it vital to deliver a seamless experience.

Today, we will dive into three code patterns that can significantly impact app performance. Identifying and addressing them is key to delivering a smooth user experience:

  1. Network request processing not prioritized in the queue: This pattern occurs when a network request executes promptly but faces delays in processing its results on the main thread queue. This can lead to delayed UI updates, affecting user satisfaction. Offloading tasks from the main thread and optimizing resource usage can help mitigate this issue.
  2. Excessive function execution: Sometimes, functions are triggered to execute more times than intended, causing unnecessary performance bottlenecks. Careful management of function execution is necessary to maintain app efficiency.
  3. Delayed loading of media: Waiting too long to load media can result in noticeable lag, frustrating users. Ensuring that media loads promptly can enhance the user experience.

In addressing these patterns, focusing on offloading tasks from the main thread and optimizing resource usage is crucial. Prioritizing performance improvements is key to keeping users engaged and satisfied with your app.

Let’s explore these patterns further.

Pattern #1. Network request processing is not prioritized in the queue.

The first code pattern occurs when a network request is scheduled and executed promptly, but its results are not processed on the main thread queue until much later. Network requests should be scheduled as early as possible so that the processes and UI updates that depend on them are not held up. Even so, processing of a request’s results is often delayed because the main thread queue is occupied with other previously scheduled tasks, and if the view update relies on information from the request, the UI update is postponed as well. In the diagram, we see a network request being executed in Thread 2 while there is a delay before its results are processed on the main thread. This could be due to a number of things, but it’s most likely a busy main thread queue.

The main thread is notoriously busier than any other thread, and some processes that execute in the main thread may be blocking the queue. The image above shows the delay in red between the network request and when its results are processed. As a result of this processing delay, the UI update is also delayed because it depends on the processed results from the network request.

This kind of delay usually results from overloading the main thread, so the solutions revolve around offloading work from it. The first step is to identify computations that don’t actually need to run on the main thread and move them to a background thread.
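As a minimal sketch of that first step, here is how offloading might look with Kotlin coroutines in a Fragment. The fragment, view ID, and report-building function are hypothetical, not taken from the article:

```kotlin
import android.widget.TextView
import androidx.fragment.app.Fragment
import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

class ReportFragment : Fragment() {

    // Stand-in for work that does not need the main thread (parsing, sorting, diffing...).
    private fun buildReport(rows: List<Int>): String =
        rows.sorted().joinToString(separator = "\n")

    fun showReport(rows: List<Int>) {
        viewLifecycleOwner.lifecycleScope.launch {
            // The expensive computation runs on a background dispatcher...
            val report = withContext(Dispatchers.Default) { buildReport(rows) }
            // ...and only the cheap view update touches the main thread.
            // R.id.report_text is a hypothetical view id.
            view?.findViewById<TextView>(R.id.report_text)?.text = report
        }
    }
}
```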

Another situation to address is when you’re working with tab views and inflating multiple layouts that might not be displayed immediately. In such cases, you can defer that work until it’s genuinely needed.
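One common way to defer that inflation on Android is a `ViewStub`, which postpones inflating an expensive layout until the tab is actually opened. The class, IDs, and layout names below are illustrative assumptions:

```kotlin
import android.view.View
import android.view.ViewStub

// R.id.heavy_tab_stub is a hypothetical <ViewStub> in the screen's layout whose
// android:layout attribute points at the expensive tab layout.
class TabContainer(private val root: View) {

    private var heavyTab: View? = null

    // Inflate the expensive tab only when the user actually opens it, instead
    // of inflating every tab's layout up front on the main thread.
    fun onHeavyTabSelected() {
        if (heavyTab == null) {
            heavyTab = root.findViewById<ViewStub>(R.id.heavy_tab_stub).inflate()
        }
        heavyTab?.visibility = View.VISIBLE
    }
}
```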

Occasionally, delays can be traced to LiveData: when business logic flows through LiveData, it runs on the main thread and becomes a bottleneck. Aim to complete most of the logic off the main thread before you reach the point where you must rely on LiveData to update the UI.
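A rough sketch of that idea, assuming a coroutine-based ViewModel (the class and state type are invented for illustration): the business logic runs on a background dispatcher, and only the finished value crosses to LiveData on the main thread.

```kotlin
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch

// Illustrative state type, not from the article.
data class CheckoutUiState(val itemCount: Int, val total: Double)

class CheckoutViewModel : ViewModel() {

    private val _uiState = MutableLiveData<CheckoutUiState>()
    val uiState: LiveData<CheckoutUiState> = _uiState

    fun onCartChanged(prices: List<Double>) {
        // Run the business logic on a background dispatcher. LiveData observers
        // always run on the main thread, so only the finished value should
        // cross that boundary.
        viewModelScope.launch(Dispatchers.Default) {
            val state = CheckoutUiState(itemCount = prices.size, total = prices.sum())
            _uiState.postValue(state)  // postValue is safe to call off the main thread
        }
    }
}
```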

To illustrate the first case of moving tasks to a background thread, there’s a blog post by Gabriel Peal, a significant contributor to the Lottie Android animation library.

Source: Gabriel Peal, Medium

This animation library operates in two phases: one for calculating screen positions and another for drawing. Historically, this work was done on the main thread. A recent release of Lottie moved these calculations to a background thread, resulting in a significant improvement. Although it introduced some complexity because it required adding concurrency locks, the end result greatly benefits users by freeing up the main thread for essential screen updates and user interactions.

Pattern #2. Function or process is triggered too many times

Another code pattern we’ve observed relates to excessively triggering functions or processes. An example of this pattern occurs when the completion of a network request triggers the View model to update the UI multiple times. As shown in the diagram below, two images illustrate two execution paths for UI updates, referred to as UI update 1 and UI update 2.

For ease of visibility, the image above separates the execution paths that lead to UI update 1 and UI update 2. UI update 1 relies on information from the network request and on Epoxy controller 1, which constructs its view model. UI update 2 depends on the same network request as UI update 1, but a second Epoxy controller builds the same view model as Epoxy controller 1, because the same network request initiates both. As a result, two Epoxy controller slices construct the view model for the upcoming screen, producing two virtually identical UI updates from the shared data source.

Users may experience issues like screen refreshes or repeated UI updates when a function or process is triggered excessively, which can negatively impact app performance. This problem can affect various user flows when loading different screens, especially when these flows follow a similar execution pattern for updating their view models. Identifying the function responsible for these duplicate triggers can lead to a relatively quick resolution, significantly enhancing mobile app performance.
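One common way to guard against such duplicates, not something the article prescribes, is to collapse identical emissions before they reach the UI, for example with `distinctUntilChanged` on a Kotlin Flow. The data types here are illustrative:

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.distinctUntilChanged
import kotlinx.coroutines.flow.map

// Illustrative types, not from the article.
data class Article(val id: Long, val title: String)
data class ArticleUiModel(val heading: String)

// If two code paths emit the same payload for the same network result,
// distinctUntilChanged drops the duplicate so the UI is rebuilt only when the
// model actually changes (data-class equality makes duplicates comparable).
fun uiModels(networkResults: Flow<Article>): Flow<ArticleUiModel> =
    networkResults
        .map { ArticleUiModel(heading = it.title) }
        .distinctUntilChanged()
```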

Having a view model trigger multiple UI updates is a common occurrence. Other instances involve executing actions in both `onCreate` and `onResume`, leading to redundant function calls. Surprisingly, callbacks and listeners that should have been unregistered sometimes get forgotten. Misunderstanding the events that trigger a callback, or the intention of the callback itself, can also result in multiple calls. One prominent example is unnecessary recompositions in the Jetpack Compose UI library. On the right side of the picture below, the sample code demonstrates how these recompositions can occur, especially when dealing with dynamic UI elements.

Suppose you’re building a list and want a button that appears once the user scrolls away from the top. To handle this case, we compute a Boolean called `showButton` from `listState.firstVisibleItemIndex`, where `listState` is the `LazyListState` backing the LazyList in Jetpack Compose. In the first code snippet, every time the user scrolls by any amount, `showButton` is recomputed and triggers a recomposition of the Composable. Jetpack Compose provides `remember` and `derivedStateOf`, which together avoid recomposing the Composable except when the value computed via `derivedStateOf` actually changes. This emphasizes that a developer’s mental model should match what’s happening in reality, and it’s helpful to have a tool that shows you what’s actually going on.
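Here is a minimal sketch of that pattern, assuming the standard `rememberLazyListState` and `derivedStateOf` APIs; the composable and its parameters are illustrative, not the article’s exact snippet:

```kotlin
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.rememberLazyListState
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.derivedStateOf
import androidx.compose.runtime.getValue
import androidx.compose.runtime.remember

@Composable
fun ArticleList(titles: List<String>, onScrollToTop: () -> Unit) {
    val listState = rememberLazyListState()

    // Reading firstVisibleItemIndex directly in composition would recompose on
    // every scroll frame; derivedStateOf limits recomposition to the moments
    // when the Boolean actually flips.
    val showButton by remember {
        derivedStateOf { listState.firstVisibleItemIndex > 0 }
    }

    Box {
        LazyColumn(state = listState) {
            items(titles.size) { index -> Text(text = titles[index]) }
        }
        if (showButton) {
            Button(onClick = onScrollToTop) { Text(text = "Back to top") }
        }
    }
}
```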

Pattern #3. Delayed media loading and processing

The third code pattern is related to delayed media loading and processing. In many popular apps today, images, GIFs, and videos play a significant role, and users have high expectations for app performance. Nobody wants to use an app where they’re constantly waiting for media to load. A common code pattern, especially in media-heavy apps, involves the improper prioritization of media fetching and decoding.

Here’s the typical pattern when loading a screen with media content. First, a network request fetches the necessary data for the next screen. Then, the results of these network requests are processed on the main thread, where the view for the next screen is prepared, as shown in the diagram below.

After the view is ready, another network request is scheduled to fetch the media, such as images, videos, or multimedia content. Once the request is complete, the media must be decoded. Only after decoding is the UI updated to display the media on the screen.

This pattern results in a noticeable delay for users, because they have to wait for the media to be fetched and decoded after the view for the next screen is prepared. Instead, we recommend fetching the media in parallel with view preparation on the main thread: schedule the network request for the media at the same time as view preparation, and potentially decode it in parallel as well. This approach eliminates the delay users experience while waiting for media to load, since everything happens alongside view preparation. Consequently, the UI update occurs much faster, with no additional waiting time for media fetching and decoding.
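Here is one way the parallel approach might look with Kotlin coroutines; the class, function names, and callback shapes are assumptions for illustration, not the article’s actual code:

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.async
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// `scope` is assumed to be a main-thread scope such as lifecycleScope.
class DetailScreenLoader(private val scope: CoroutineScope) {

    private suspend fun fetchAndDecodeImage(url: String): ByteArray =
        withContext(Dispatchers.IO) {
            ByteArray(0)  // placeholder for the actual network fetch + decode
        }

    fun open(imageUrl: String, prepareView: suspend () -> Unit, show: (ByteArray) -> Unit) {
        scope.launch {
            // Start fetching/decoding the media immediately, in parallel with
            // view preparation, instead of waiting for the view to be ready.
            val media = async { fetchAndDecodeImage(imageUrl) }
            prepareView()        // main-thread view work proceeds concurrently
            show(media.await())  // by the time the view is ready, the media is usually decoded
        }
    }
}
```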

It’s important to initiate the loading process as early as possible, especially for tasks that take a significant amount of time. In the provided demonstration, we showcase loading a video in ExoPlayer as an example. ExoPlayer offers various methods to start playing a video. One approach is to call `play()` on the Player after you have a VideoView available. Another method, `prepare` (or `prepareAsync`), lets you start the operation with just the Player, without needing a VideoView for display. In the GIF, you can observe that the video at the top has been prepared before the view appears on the screen, while the video controller at the bottom executes `play` only after the view is ready.
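A rough sketch of preparing early, assuming the Media3 ExoPlayer API (the wrapper class and method names around it are illustrative, and the exact setup may differ from the demo shown in the GIF):

```kotlin
import android.content.Context
import androidx.media3.common.MediaItem
import androidx.media3.exoplayer.ExoPlayer
import androidx.media3.ui.PlayerView

class VideoPreloader(context: Context) {

    val player: ExoPlayer = ExoPlayer.Builder(context).build()

    // Call as soon as you know which video will be shown (e.g. when the user
    // taps the list item), before the playback view even exists.
    fun prepareEarly(videoUrl: String) {
        player.setMediaItem(MediaItem.fromUri(videoUrl))
        player.prepare()  // starts buffering without needing a view
    }

    // Call once the screen's PlayerView has been laid out.
    fun attachAndPlay(playerView: PlayerView) {
        playerView.player = player
        player.play()  // playback starts almost immediately, since buffering began earlier
    }
}
```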

If you observe closely, the video at the bottom lags behind the one at the top by a couple of seconds.

This lag can occur in various scenarios, such as in a list-detail flow where you navigate from the main list view to the detail view. Tutorials often teach you to load images for the detail view after creating the view, but nothing prevents you from initiating the loading process as soon as the item in the list is selected, creating a seamless user experience without progress bars or delays. This approach can make your app feel more magical to users, providing instant access to content.
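As a sketch of that idea using Glide’s standard preload API (the function names are illustrative, and any image library with a prefetch call would work similarly):

```kotlin
import android.content.Context
import android.widget.ImageView
import com.bumptech.glide.Glide

// Kick off the image download the moment the list row is tapped, so by the
// time the detail screen inflates its ImageView the bitmap is already cached.
fun onListItemClicked(context: Context, detailImageUrl: String) {
    Glide.with(context)
        .load(detailImageUrl)
        .preload()  // warms Glide's memory/disk cache without a target view
    // ...then navigate to the detail screen as usual.
}

// In the detail screen, the same load resolves from cache almost instantly.
fun bindDetailImage(imageView: ImageView, detailImageUrl: String) {
    Glide.with(imageView)
        .load(detailImageUrl)
        .into(imageView)
}
```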

For the three code patterns discussed, let’s recap the impact on user experience and how difficult they are to identify.

Starting with the first code pattern, where network request processing is not prioritized in the queue, this issue significantly impacts user experience. Delayed processing of network request results leads to delayed UI updates, resulting in a visual delay for users. Identifying this pattern can be challenging because many developers rely on libraries to handle network requests and UI updates. Therefore, having tools to monitor and identify such issues is crucial.

Moving on to the second code pattern, where a function or process is triggered too many times, the impact on user experience can vary depending on the specific process. Duplicate behavior can increase a user’s overall waiting time. This code pattern can be tricky to spot, often resulting from misunderstandings or unintended duplications.

Let’s consider the third code pattern, where media loading is delayed. Properly prioritizing media loading can significantly enhance user experience by reducing wait times. Identifying this pattern can be moderately challenging, as it requires careful consideration of when and how media content is loaded.

If you’re interested in tackling challenges like this, join our team! Time is humanity’s most valuable non-renewable resource. Our mission is to help all people in the world stop experiencing delays from software inefficiency.

Stay tuned by following us on LinkedIn.

Mobile Developer? Sign up for Product Science’s mobile performance newsletter.

Request a Product Science demo today.
