Initial Screen Experiments on Mobile Apps
Experiments on the first screen after app launch can be challenging under adverse network conditions
In the Hotels.com™️ iOS app, we occasionally run A/B testing experiments on the first screen. In every experiment, we need to know whether the user is assigned to the control or the variant so that we can track the corresponding experience. We determine a user’s assignment by making an HTTP call to Apptimize, which usually responds in a fraction of a second but can take longer on slow network connections. As a result, the first screen might load before we get a response from Apptimize, which is the problem we will be tackling in this blog post. On a second launch, this is no longer a problem, since the assignment is available locally, assuming the earlier Apptimize call succeeded.
Criteria for the ideal solution
The ideal solution would cover the following criteria:
- Show the assigned screen to the user as soon as possible
- Accurate tracking
- Friendly UX
- Low code complexity
- Low QA complexity
The solution we implemented shows the correct bucket only from the second launch onwards, meaning that closing and reopening the app isn’t enough to see the correct bucket: the user or the operating system needs to kill the app first. In other words, when the Apptimize call takes longer to respond than the screen load, the app has the following flow:
- Launch the app after the experiment is live
- The user sees the control screen
- The Apptimize call returns, letting us know whether the user is assigned to the control or the variant in the experiment, and that value is cached locally
- The user continues seeing the control screen, even if the user was assigned to the variant
- No tracking is fired
- When the user relaunches the app, they see the correct version as assigned by Apptimize (either control or variant)
- Tracking is fired, indicating in which variant the user is assigned
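The flow above can be sketched in a few lines of Swift. This is a simplified sketch, not our production code: `FirstScreenExperiment`, `screenWillLoad` and the `UserDefaults` cache key are hypothetical names, and the Apptimize response is abstracted into a plain callback.

```swift
import Foundation

enum Bucket: String { case control, variant }

final class FirstScreenExperiment {
    private let defaults: UserDefaults
    private let cacheKey = "firstScreenBucket" // hypothetical key
    private(set) var shownBucket: Bucket?

    init(defaults: UserDefaults = .standard) {
        self.defaults = defaults
    }

    /// Called once, when the first screen is about to load. The screen
    /// is chosen from whatever is cached at that moment; with no cached
    /// assignment yet, control is the fallback.
    func screenWillLoad() -> Bucket {
        let bucket = defaults.string(forKey: cacheKey)
            .flatMap(Bucket.init(rawValue:)) ?? .control
        shownBucket = bucket
        return bucket
    }

    /// Called whenever the Apptimize response arrives. The assignment is
    /// cached for the next launch, but a screen that is already on
    /// display is never swapped.
    func assignmentDidArrive(_ bucket: Bucket) {
        defaults.set(bucket.rawValue, forKey: cacheKey)
    }
}
```

On a slow network, the first launch shows control even if the response later assigns the user to the variant; on the next launch, `screenWillLoad()` finds the cached assignment and returns the correct bucket.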
Let’s check how our solution performs based on the criteria we described above.
Accurate tracking
Tracking is accurate with this solution because it is only fired when we are certain that the user is assigned to the control or the variant and seeing the corresponding experience.
Firing on the first launch after getting the response from Apptimize would be incorrect. On a slow network, if we reported what the user sees on the first launch (control), the user might actually be assigned to the variant, so on a second launch they would be reported as variant. The user would then be reported in both buckets, and the analytics team wouldn’t be able to conclude anything from the experiment for this user.
Immediately reporting the assignment from Apptimize would also be wrong, because we could report variant while the user was seeing the control, and the user might even convert while seeing the control. When analysing the experiment’s results, false conclusions could be drawn for this user.
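The rule can be captured in one small decision function: tracking fires only when the shown screen was driven by a cached assignment, so what we report always matches what the user saw. A sketch with hypothetical names (`LaunchDecision`, `decide`), not our production API:

```swift
enum Bucket: String { case control, variant }

struct LaunchDecision {
    let bucketToShow: Bucket
    /// nil means: fire no tracking on this launch.
    let bucketToTrack: Bucket?
}

/// Decide what to show and what to report for one launch, based on the
/// assignment cached before the screen loaded (if any).
func decide(cachedAssignment: Bucket?) -> LaunchDecision {
    guard let assigned = cachedAssignment else {
        // No assignment yet: show control as a fallback and report
        // nothing, because the user might belong to the variant.
        return LaunchDecision(bucketToShow: .control, bucketToTrack: nil)
    }
    // Assignment known before the screen loaded: show and report agree.
    return LaunchDecision(bucketToShow: assigned, bucketToTrack: assigned)
}
```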
Friendly UX
The user experience is friendly, given that the user doesn’t see any changes while using the app. Changing the UI mid-session could confuse our users.
Low code complexity
The code complexity for this solution is low, given that we only need to cover simple scenarios:
- When the app launches, know in which variant the user is assigned before the screen loads
- When the app launches, know in which variant the user is assigned after the screen loads
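Both scenarios reduce to the same rule: the screen picks its bucket exactly once, at load time, from whatever is cached at that moment. A minimal in-memory sketch of the two orderings (the type and method names are ours, not Apptimize’s):

```swift
enum Bucket { case control, variant }

final class Launch {
    private var cachedAssignment: Bucket?
    private(set) var shownBucket: Bucket?

    /// The screen picks its bucket exactly once, when it loads.
    func screenDidLoad() {
        guard shownBucket == nil else { return }
        shownBucket = cachedAssignment ?? .control
    }

    /// The Apptimize response caches the assignment whenever it
    /// arrives; a screen already on display is never swapped.
    func apptimizeDidRespond(_ bucket: Bucket) {
        cachedAssignment = bucket
    }
}

// Scenario 1: the response arrives before the screen loads,
// so the correct bucket is shown straight away.
let fastNetwork = Launch()
fastNetwork.apptimizeDidRespond(.variant)
fastNetwork.screenDidLoad()
// fastNetwork.shownBucket == .variant

// Scenario 2: the response arrives after the screen loads,
// so control is shown and the assignment waits for the next launch.
let slowNetwork = Launch()
slowNetwork.screenDidLoad()
slowNetwork.apptimizeDidRespond(.variant)
// slowNetwork.shownBucket == .control
```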
Low QA complexity
The QA team also needs to only cover the scenarios described above, so the QA complexity is low as well.
Show the experiment to the user as soon as possible?
From the criteria described for the ideal solution, there is one disadvantage to the implemented solution: the user doesn’t see the experiment as soon as possible. The user or the operating system needs to kill the app and open it again to see the correct variant. This raised a question: what percentage of our users use the app without killing it? We looked at our data over a period of 30 days:
Percentage of users who opened the app after it was killed: 92.26%
Percentage of users who opened the app without killing it: 7.74%
In 30 days, 92.26% of our users who opened the app have previously killed it, meaning that if an experiment was made live at least 30 days ago, 92.26% of our users who opened the app were candidates for the experiment. That is a high percentage and it represents a lot of users given the traffic we have in our Hotels.com app. After all, the goal of an experiment is to have enough traffic to conclude if it is successful.
We concluded that showing the correct experience only on the second launch when experiments are on the first screen works well. There is no need to improve it by assigning users more quickly to experiments.