How to Discover Your App’s ‘Aha Moment’

An ‘aha moment’ is the moment a new user first realizes the value of your product. While it is called a ‘moment’, it is really a set of actions that separates users who are likely to retain from those who will churn. For Facebook, this is a new user connecting with 7 friends in 10 days. For Slack, it is a team exchanging 2,000 messages. Each company has a unique ‘aha moment’, and discovering it is crucial to driving higher retention. Finding it is no easy task, so below is a step-by-step guide to walk you through the process.

Before we start, it is worth noting that while these numbers seem specific, they are exactly the opposite. One user on Facebook may be hooked after adding 3 friends in 10 days, while another may churn even after adding 12 friends in 3 days. The benefit of defining an ‘aha moment’ is that it focuses the entire company around a clear, meaningful North Star. According to Facebook’s former Head of Growth, Chamath Palihapitiya, the team talked about nothing else but ‘7 friends in 10 days’.

Posts by Apptimize and Mode Analytics served as key resources for this piece.

Step 1: Understand Baseline Retention

The first step involves understanding your app’s current baseline retention curves. These charts can be created easily on services such as Mixpanel and Amplitude. Breaking down retention into acquisition cohorts is helpful in reducing noise and allowing you to see recent trends versus the overall averages. Below is sample (completely made up) data for a messaging app.

The data shows that there is a steep dropoff in retention after day 0, with a more gradual level of churn after that. The best way to visualize this information is to graph it over the course of 30 days. Our goal through this exercise is to try to improve these percentages.
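If you want to reproduce these curves outside of a service like Mixpanel or Amplitude, a minimal sketch in Python is below. The event-log structure and the sample users are assumptions for illustration, not real data:

```python
from datetime import date

# Hypothetical event log: user id -> (signup date, set of dates the user was active)
users = {
    "u1": (date(2024, 1, 1), {date(2024, 1, 1), date(2024, 1, 2), date(2024, 1, 8)}),
    "u2": (date(2024, 1, 1), {date(2024, 1, 1)}),
    "u3": (date(2024, 1, 2), {date(2024, 1, 2), date(2024, 1, 3)}),
}

def retention_curve(users, days=7):
    """% of users active exactly N days after their own signup, for N = 0..days.

    This is classic N-day retention; for simplicity it ignores users whose
    day-N window has not elapsed yet.
    """
    curve = []
    for n in range(days + 1):
        active = sum(
            1 for signup, dates in users.values()
            if any((d - signup).days == n for d in dates)
        )
        curve.append(round(100 * active / len(users), 1))
    return curve

print(retention_curve(users, days=3))  # [100.0, 66.7, 0.0, 0.0]
```

In practice you would compute this per acquisition cohort (one curve per signup week, for example) to see the trends rather than the overall average.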

Step 2: Create Hypotheses

Now that we understand the baseline retention for our app, the next step is to create hypotheses about what actions or features might impact retention. A good first step is to look at raw data from the most loyal users and compare it to those who churned. What actions did loyal users perform that churned users did not?

In the case of a messaging app, users who retained may perform the following actions early in their lifecycle:

  • Add friends
  • View a message
  • Send a message

Our hypothesis might be that users who add friends, or who send messages, in the first few days after signup will experience higher retention. This seems obvious, but stick with me; the point is to make sure you don’t overlook any candidate actions. It might be helpful to brainstorm a list of 20–30 actions performed by new users and pick the 2–3 that seem most relevant.

Step 3: Test Hypotheses

Based on our hypotheses, we can group users into buckets or behavioral cohorts based on whether or not they performed those actions. Looking at these cohorts will allow us to understand whether or not these actions actually correlated with higher retention.

In the case of our app, we will compare a cohort of users who added at least one friend in the first day after signup to the baseline (our entire userbase).

As can be interpreted from the data, users who added at least one friend experienced slightly higher retention over the course of the first 30 days. The lift was most significant during the first week.

Compare all the key actions against the baseline. If there is no difference versus the baseline, then focus on other metrics.
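The cohort comparison above can be sketched in a few lines. The per-user records below are made up, and the field names are assumptions for illustration:

```python
# Hypothetical per-user records: did the user add a friend on day 0,
# and were they still active on day 7?
records = [
    {"added_friend_day0": True,  "active_day7": True},
    {"added_friend_day0": True,  "active_day7": True},
    {"added_friend_day0": True,  "active_day7": False},
    {"added_friend_day0": False, "active_day7": True},
    {"added_friend_day0": False, "active_day7": False},
    {"added_friend_day0": False, "active_day7": False},
]

def day7_retention(rows):
    """% of the given users still active on day 7."""
    return round(100 * sum(r["active_day7"] for r in rows) / len(rows), 1)

baseline = day7_retention(records)                                       # entire userbase
cohort = day7_retention([r for r in records if r["added_friend_day0"]])  # behavioral cohort
print(baseline, cohort)  # 50.0 66.7
```

If the cohort's curve sits meaningfully above the baseline, the action is a candidate; if the two curves overlap, move on to the next hypothesis.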

Step 4: Find the Optimal Frequency of Actions

We now know which actions correlate with retention, but we don’t quite know how many times these actions must be performed for optimal retention. For an action to qualify as an ‘aha moment’ it must represent the tipping point for the majority of your users. This means that:

Most users who performed the action, retained

AND

Most users who retained, performed the action

Our goal is to maximize the shaded area. This can get fairly confusing, so let’s walk through an example to show how it works and what mistakes to avoid.

The first step is to define retention, which depends largely on your app and preferences. For example, a retained user could be one who logged in at least 4 times during the fourth week after registration (Week 4, 4+ logins). Next, you must define the number of days after registration to include in the activity count. For example, you could count the number of friends added after one session, one day, or one week.
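A retention definition like this is easy to pin down in code. The week boundaries below (Week 4 covering days 21–27 after signup) are one assumed convention; adjust to taste:

```python
def retained(session_days, week=4, min_sessions=4):
    """True if the user had at least `min_sessions` sessions in the given week.

    session_days: days-since-signup for each session (0 = signup day).
    Under this (assumed) convention, Week 4 covers days 21-27.
    """
    start, end = (week - 1) * 7, week * 7  # half-open window [21, 28)
    return sum(start <= d < end for d in session_days) >= min_sessions

print(retained([0, 1, 21, 22, 23, 25]))  # True: 4 sessions fall in days 21-27
print(retained([0, 1, 21, 22]))          # False: only 2 sessions in the window
```

Writing the definition down explicitly also forces you to decide edge cases (does the signup day count? are windows half-open?) before you start bucketing users.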

It may be tempting to bucket users by how many times they performed the activity and whether or not they retained, as presented above. That view would lead us to conclude that we should push users to add as many friends as possible, because retention rises with activity. However, doing so ignores the mass of users who still retain without performing that level of activity.

In graphical form below, we can see that while a majority of users who added at least 8 friends retained (large % of gold is shaded below), this ignores a significant population of users who retained but didn’t perform this level of activity (small % of blue is shaded below).

USERS WHO ADDED AT LEAST 8 FRIENDS

Most users who performed the action (added ≥ 8 friends), retained YES

AND

Most users who retained, performed the action (added ≥ 8 friends) NO

On the flip side, only a minority of users who added at least 1 friend retained (small % of gold is shaded below), but of users who retained, a majority added at least one friend (large % of blue is shaded below).

USERS WHO ADDED AT LEAST 1 FRIEND

Most users who performed the action (added ≥ 1 friend), retained NO

AND

Most users who retained, performed the action (added ≥ 1 friend) YES

Our goal is to find the sweet spot that maximizes the shaded region while minimizing the unshaded blue and gold regions; we want both of our statements to be YES. In a perfect world, we would want 100% overlap of the two circles (every user who performed the level of activity retained, and every user who retained performed the level of activity), but this is not practical. By adding a column for users who retained but didn’t perform the level of activity, we can calculate the overlap easily. This is, once again, sample data.

  • A: Retained but didn’t add at least [X] friends (blue region)
  • B: Retained and added at least [X] friends (shaded region)
  • C: Added at least [X] friends (shaded region + gold region)
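The two conditions translate directly into two ratios per threshold: B/C (of users who performed the action, the share who retained) and B/(A+B) (of users who retained, the share who performed the action). The counts below are made up, and picking the threshold that maximizes the sum of the two ratios (subject to both exceeding 50%) is one reasonable scoring choice, not the only one:

```python
# Hypothetical counts per threshold X (friends added in the first day):
#   A = retained but added fewer than X friends
#   B = retained and added at least X friends
#   C = all users who added at least X friends
data = {
    1: {"A": 50,  "B": 950, "C": 2400},
    3: {"A": 300, "B": 700, "C": 1100},
    8: {"A": 800, "B": 200, "C": 250},
}

def overlap(row):
    did_and_retained = row["B"] / row["C"]               # of those who did it, % retained
    retained_and_did = row["B"] / (row["A"] + row["B"])  # of those who retained, % who did it
    return did_and_retained, retained_and_did

best = max(
    (x for x, row in data.items() if all(p > 0.5 for p in overlap(row))),
    key=lambda x: sum(overlap(data[x])),
)
print(best)  # 3 with this made-up data
```

With these numbers, X = 1 fails the first condition (950/2400 ≈ 40% of users who added a friend retained) and X = 8 fails the second (only 200/1000 = 20% of retained users added 8+ friends), leaving X = 3 as the sweet spot.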

Based on the data, the optimal number of friends to add that maximizes the overlap is three.

USERS WHO ADDED AT LEAST 3 FRIENDS

Most users who performed the action (added ≥ 3 friends), retained YES

AND

Most users who retained, performed the action (added ≥ 3 friends) YES

You will want to perform this analysis for each of the metrics that you hypothesized impact retention. If your company has a data science team, it is possible to perform this analysis more quickly using decision tree modeling.
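As a toy sketch of the decision-tree approach, a depth-1 tree fit on a single feature recovers a split threshold automatically; this assumes scikit-learn is available, and the friend counts and labels below are fabricated for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up data: friends added per user, and whether each user retained.
X = [[f] for f in [0, 1, 2, 3, 4, 5, 6, 8, 10, 12]]
y = [0, 0, 0, 1, 1, 1, 1, 1, 1, 1]  # 1 = retained

# A depth-1 tree ("stump") searches all cut points and keeps the best one.
tree = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X, y)
print(tree.tree_.threshold[0])  # split point the tree chose, 2.5 here
```

A real model would use many candidate actions as features at once, and the top splits of a deeper tree suggest which actions (and thresholds) best separate retained users from churned ones.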

Step 5: Sanity Check with Graphs

In order to visualize the results, it is helpful to create a behavioral cohort (this time with users who added at least 3 friends), and chart the retention. We can see that the retention is much higher than baseline for this group, and also higher than the behavioral cohort that included users who added at least 1 friend.

Step 6: Determine Causation

Although we have identified optimal breakpoints and actions, our analysis is still entirely correlative. To determine causality, you must run A/B tests that measure how changes to the product truly impact retention. In our example, we determined that we should aim to get users to add at least 3 friends early in their lifecycle. Hopefully, by doing so, users will experience higher retention.

Some product tests that I would consider running include:

  • Prompt users to add friends earlier in the registration process
  • Make friend suggestions more prominent after registration
  • Add tooltips that point users to add friends during the first few sessions

Each application is different, and there are many different tests that can be run. Ideally, run multiple tests before concluding that you have found a true ‘aha moment’. Once you have validated one, set it as a North Star and focus your team on it!

Good luck!

Sources: Apptimize, Mode Analytics