Android App Startup Time Optimizations — Part 1

Doron Kakuli · Published in Chegg · Nov 20, 2022

In this series of articles, I’m going to cover several optimizations that can decrease your app’s startup time significantly.

It all began with Android Vitals alerts

It all began when we started to see some warnings on the Android Vitals performance dashboard, showing poor results for the app's startup time, specifically time to initial display (TTID).

We had already added some custom measurements of our app's performance using Firebase Performance Monitoring, to gain more insight.
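If you haven't set up such a measurement yet, here's a minimal sketch of what a custom startup trace can look like with the Firebase Performance Monitoring API. The trace name app_startup_ttid and the MyApp/stopStartupTrace names are illustrative, not our actual code:

```kotlin
import android.app.Application
import com.google.firebase.perf.FirebasePerformance
import com.google.firebase.perf.metrics.Trace

class MyApp : Application() {

    // "app_startup_ttid" is a hypothetical trace name for illustration.
    private var startupTrace: Trace? = null

    override fun onCreate() {
        super.onCreate()
        // Start measuring as early as possible in the process lifetime.
        startupTrace = FirebasePerformance.getInstance()
            .newTrace("app_startup_ttid")
            .apply { start() }
    }

    // Call this from the launcher Activity once its first frame is
    // drawn, so the trace roughly approximates TTID.
    fun stopStartupTrace() {
        startupTrace?.stop()
        startupTrace = null // report the trace only once per process
    }
}
```

In the launcher Activity, something like `window.decorView.post { (application as MyApp).stopStartupTrace() }` in onResume() stops the trace close to the first drawn frame.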

And here are some of the results we found:

When analyzing the graph, it seemed that the app's startup time had increased by approximately 2 seconds compared to the versions released in the prior 30 days.

So, I started digging into the changes made in the last 30 days that might have caused this regression.

App architecture

Before I jump into the changes made, let’s describe our application architecture.
We have several apps with multiple Core and UI features, located in a single monorepo.

Prerequisites

Before this issue came up, we had made two major changes to properly separate and decouple the apps and modules:

  1. Migrating all apps and modules from Dagger to Hilt dependency injection (see the sketch after this list).
  2. Separating each app's config file into a module-independent config.
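
To give a feel for change #1, here's a minimal sketch of what a module looks like after the move to Hilt. ConfigParser, ConfigRepository, ConfigModule, and ExampleApp are hypothetical stand-ins for our real classes:

```kotlin
import android.app.Application
import dagger.Module
import dagger.Provides
import dagger.hilt.InstallIn
import dagger.hilt.android.HiltAndroidApp
import dagger.hilt.components.SingletonComponent
import javax.inject.Inject
import javax.inject.Singleton

// Hypothetical stand-ins for our real config classes.
class ConfigParser @Inject constructor()
class ConfigRepository(private val parser: ConfigParser)

// With plain Dagger we maintained hand-written components per app;
// with Hilt, each module declares the component it installs into and
// Hilt generates the graph across all Gradle modules.
@Module
@InstallIn(SingletonComponent::class)
object ConfigModule {

    @Provides
    @Singleton
    fun provideConfigRepository(parser: ConfigParser): ConfigRepository =
        ConfigRepository(parser)
}

// A single annotation replaces the per-app component wiring.
@HiltAndroidApp
class ExampleApp : Application()
```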

When I started to investigate, I took into consideration that those recent changes might be related.

Investigation Process

I did some performance optimizations that I'll cover more thoroughly in part 2, but I would like to emphasize that without the following tool, I would never have discovered one of the biggest offenders.

Part 1: Using Profiler with Java/Kotlin Method trace

Using the profiler's default behavior didn't help me much in understanding the issue. Setting it to Java/Kotlin method trace actually did the trick:

  1. Open the Run/Debug configuration dialog.
  2. Select the Profiling tab.
  3. Enable Start this recording on startup.
  4. Choose Java/Kotlin Method trace under CPU activity.
  5. Click the Profiling icon; this will automatically start recording the method stack trace.

Profiling icon

Please note that a recording of more than 20 seconds is too heavy for the IDE; sometimes, parsing such a long recording fails.

For my analysis, the first 10–15 seconds of the app startup time were enough.
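
As a side note, if you prefer capturing a trace programmatically rather than through the IDE checkbox, the platform's android.os.Debug API can bracket exactly the code you care about. The file name and buffer size below are illustrative choices:

```kotlin
import android.app.Application
import android.os.Debug

class TracedApp : Application() {

    override fun onCreate() {
        // With a relative name, the .trace file lands under the app's
        // package-specific external files directory.
        Debug.startMethodTracing("startup", 64 * 1024 * 1024)
        super.onCreate()
        // ... the initialization you want captured ...
        Debug.stopMethodTracing()
    }
}
```

The resulting .trace file can then be opened in Android Studio's profiler for the same top-down analysis.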

Another click on the Main drop-down will show us a nice graph with all the stack trace methods. Each click on a timeline block will open its specific chain of events/methods.

Please note that this screen is a bit heavy for the IDE to render (you'll face some freezes), so my suggestion is to use the right panel (covered next).

Top-down list of the methods running in the main thread (of the current recording)

When reviewing the analysis of the methods, we can also see the μs (microseconds) value, which describes how long it takes for the specific method to finish (including its children).

Continuing to drill down the main thread's list of methods, I found that there was a specific method causing that overhead: getCachedOrFailure.

getCachedOrFailure takes more than 1.2 sec
getCachedOrFailure takes more than 1 sec

Finding that this was the issue enabled us to refactor the method and parse our config files correctly (I'll post a dedicated article on reflection vs. code generation benchmarks). The results were amazing!
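To illustrate the shape of the fix (our actual code isn't published, and all names below, including Config and ConfigStore, are hypothetical), the idea was to parse each config file at most once and keep the result in memory, with the parse itself done by a compile-time-generated serializer instead of reflection:

```kotlin
import java.util.concurrent.ConcurrentHashMap
import kotlinx.serialization.Serializable
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.json.Json

// Hypothetical config shape; the real files have more fields.
@Serializable
data class Config(val apiUrl: String, val featureFlags: Map<String, Boolean>)

class ConfigStore(private val readFile: (String) -> String) {

    private val cache = ConcurrentHashMap<String, Config>()

    // Later calls become a map lookup instead of a full re-parse.
    fun getCachedOrFailure(fileName: String): Config =
        cache.getOrPut(fileName) {
            // kotlinx.serialization uses a compile-time-generated
            // serializer here, avoiding per-call reflection cost.
            Json.decodeFromString(readFile(fileName))
        }
}
```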

Optimizations and results

getCachedOrFailure takes 0.054 seconds

As you can see, we've reduced the runtime of a single method from approx. 1.2 sec to 0.054 sec! This is a significant optimization, considering we have multiple config files!

Next steps

In the next part, I'll share additional optimizations such as lazy initialization, parser optimizations, and StrictMode capabilities.
