How to conduct Aha-moment (aka Magic moment) analysis without knowledge of statistics or data science!

Abdelrahman Wahba
Egyptian Startup Manual
May 10, 2016

An ill-advised but definitely recommended journey into the unknown universe of user behavior data!

Mischief, learning, value, pain and pleasure

Among the progressive and relatively avant-garde activities I’m proud we did at iqraaly is analyzing why people continue using iqraaly (our app for narrated Arabic audiobooks, audio articles and podcasts) for longer periods, a question commonly known as finding the aha-moment or magic-moment of a tech product.

The aha-moment is the set of events or the particular experience a user has with a product that transforms him/her from a casual user trying out the app into an avid, hardcore, loyal user who will stick with the app until “forever”.

The importance of the aha-moment

The aha-moment sits at the very core of identifying the user retention pattern of a mobile/web product, specifically mid- to long-term retention; and user retention is the defining success factor of any consumer app/tech-product, as it is the main component in calculating an app’s ARPU (Average Revenue Per User) and ARPPU (Average Revenue Per Paying User).

How do you run an aha-moment analysis?

A lot of blogs state the importance of identifying the aha-moment of a product. They might also state the identified aha-moments of large products like Facebook, Dropbox, Zynga, Twitter and perhaps others (here). However, nobody usually tells you the methodology for identifying a product’s aha-moment, which is the gap I’m trying to fill with this article.

It took us about 10 weeks of work (Mahmoud Salaheldin, iqraaly’s Technical Manager back then, and myself, the co-founder and General Manager) to figure out the path towards analyzing the aha-moment, in addition to doing the analysis itself. This showed us that a lot of pretty simple insights can be explained in a couple of sentences while disguising a heap of work behind them; especially when you have a pool of 500,000 app downloads and 4,000,000+ monthly listened minutes to trawl through (back then). Searching along an unknown path for something rather unknown is quite the ride.

When this article was still in review, it was pointed out to me that, from a reader’s point of view, the journey starts in a somewhat confusing mood, which might turn readers away. And this would be a shame, because it all became clear to this reader in the end. It dawned on me that I’d written this article in a way that actually shared the states I went through while searching for iqraaly’s aha-moment. So, I thought to mention this upfront: if you feel a bit confused in the beginning, keep on reading; this was how I actually felt during the process, and things got progressively very, very clear… eventually.

Anyway here goes:

Where and how do you start?

The golden rules here are:

  1. When searching for insights, you start with what you know about your product, and work your way through the unknown. Start by stating/reporting/observing the metrics and KPI’s that you are comfortable identifying and then ask questions to which you don’t have answers yet.
  2. Define an initial path of data exploration. Ask yourself (and your team) questions about the users and their behavior, even if those questions are hypothetical — like “What is the most used feature by the users who stay active for over 1 year?” Or “What should be the number of sessions a user has per month before converting into a premium/paying user?” — Those questions are your path of exploring your data. Remember, you explore the data in search of answers to specific questions. If you don’t have questions, you won’t get answers.
  3. Write down the path you are about to take regardless whether you know where it ends or not, because it is very easy to drown in the oceans of data you are about to explore and forget where you were headed in the 1st place.
  4. The aha-moment is very case specific, so you will need to figure out the specific metrics/KPIs that work for your product yourself.

Aha-moment analysis pre-requisites:

  • You have to already have a decent amount of your app/product’s data at hand, e.g. 6–12 months of data.
  • And since you want to identify what makes your product stick with users, who better to observe than those who have been with you “forever”? This is where you start, and this is how it starts:

What are the steps?

  1. You start by isolating and observing those who have been actively using your product for more than 3–6 months (please note that the ranges mentioned are flexible, not hard rules).
  2. Then you observe the behavior of the “forever” users and compare their behavior along their lifetime on your product with the non-forever users, and try to find something interesting there, like:
  • a distinct behavioral pattern specific to the “forever” users
  • a distinct difference in usage volumes between “forever” and the “other” users
  • a point of divergence in behavior between “forever” and “other” users

Pretty simple huh?

Yeah, but it is one of those massive easier-said-than-done situations. Here is how “simple” it really was in our case, so fasten your seat-belts because this is where the actual ride begins.

The start

Like I’ve said, you need to write down your path; so here is what we wrote down to make sure we didn’t get lost during the process (ironically, we realized we needed to do this after being lost for a while):

Our main hypothesis, which is the general hypothesis of the “aha-moment” analysis, is that there is a difference in behavior between “forever” users and the “others” leading up to a point that makes the “forever” users keep using our product “forever”, and the “others” churn. And, to compare the “forever” users with those who churn we need to do the following:

  • define what is “forever”
  • define who are “forever” users
  • define the points in time where we compare “forever” users with the “others”
  • define the basis of comparison between “forever” users and the “others”, i.e. the attributes to be compared (behavior patterns? app usage patterns? demographics? etc.)
  • actually compare the “forever” users with the “others”

Side point: Cohort analysis

The key to this exercise is cohort analysis, i.e. analyzing users in groups defined by the date of adopting the product. If you cannot group your users in cohorts, then there is no point in trying this.
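To make this concrete, here is a minimal sketch of how cohorts could be assembled with pandas, assuming a raw events table (one row per listen) whose file and column names are purely illustrative, not iqraaly’s actual schema:

```python
import pandas as pd

# Assumed raw data: one row per listen event, with illustrative columns
# user_id, event_date, minutes and episode_id (not iqraaly's real schema).
events = pd.read_csv("listen_events.csv", parse_dates=["event_date"])

# A user's cohort is the date of their 1st active use, i.e. their 1st listen.
first_listen = events.groupby("user_id")["event_date"].min().rename("cohort_date")
events = events.join(first_listen, on="user_id")

# Day-N within a cohort: Day-1 is the day of the 1st listen itself.
events["day_n"] = (events["event_date"] - events["cohort_date"]).dt.days + 1
```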

Back to our path.

What is forever?

1st of all, we had to define “forever”, i.e. When do we consider the user to be with us forever?

We decided not to let this be a definition set purely via figments of our imagination or gut feelings, but to base it on observed trends in our data.

So, we did the most granular cohort analysis we could possibly do. We defined cohorts by the day of downloading the app and 1st listening to our product, i.e. the first active use of iqraaly.

So from the 1st of May 2014 until the 30th of April 2015, we divided the users into cohorts based on the day of downloading and actively using our product.

In other words, we defined cohorts on a day-to-day basis, where we have identified how many users of each cohort were still remaining after 120 days or more from 1st listening to iqraaly. See the screenshots below:

cohorts master table

This is the explanation of the above tables:

  • The rows of the table present the cohorts, i.e. the people who have downloaded and actively used our product on a certain date. The cohorts — as you can see — are segregated by day of first active use of the application, i.e. the date of 1st listen.
  • The columns contain the number of users of a single cohort still actively using the app, i.e. listening to content, on day-1 through day-120 and beyond.
  • The numbers in the cells are the number of users belonging to the cohort designated by the row, and who have actively used iqraaly on the day denoted by the column, i.e. who have actively opened the app and listened to audio content on it.

For example, the cohort (row) 5/17/2014 showing 12 on (column) Day-6 (i.e. on 5/21/2014) means that 12 of the users who started using iqraaly on 5/17/2014 (162 on Day-1) in fact listened to iqraaly on Day-6, i.e. 5/21/2014.

Note that the total number of any given cohort is the number of users on Day-1, i.e. in this case 162.
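For those reconstructing this at home, a master table like the one above can be derived from the cohort sketch earlier with a single group-by; this is just an illustration, not the actual queries we ran:

```python
# The "master table": rows = daily cohorts, columns = Day-1 .. Day-120,
# cells = distinct users of that cohort who listened on that day.
active = (events[events["day_n"] <= 120]
          .groupby(["cohort_date", "day_n"])["user_id"]
          .nunique()
          .unstack(fill_value=0))
```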

As you can imagine, the table containing this large amount of cohort data turned out to be huge and not really revealing in any way. So, our 1st step to pull some sense out of this data blob was to observe every month on its own. Hence, we created monthly averages of the cohorts, i.e. we took an average of the daily active users of each cohort and clustered them by month, as shown below:

Monthly average of cohort behavior

Upon creating the table above, we realized that the data started making some sense, and we even plotted it into graphs. It represented the average cohort size per month and how the number of users per cohort decayed into the future. However, comparing the data of each month seemed a bit tricky, as we realized that we should be comparing trends, not absolute values. So, we decided to devise a “common denominator” for all the months by switching from absolute values to percentages — i.e. normalizing the above curves.

We defined the number of active users on the 1st day of a cohort to be 100%, as it corresponds to the total size of the cohort, and the following days are fractions of that 100%. This resulted in this table, which makes the cohorts of the months much more comparable:

Normalized monthly cohorts made it more comparable

Here is the plot of the above table:

The plot of the normalized averaged cohorts

As you can see, there are 2 very different shapes of graphs here: an exponential decay graph and a step-drop graph. You will also notice that the step-drop graphs represent cohorts before November 2014, while the exponential decay graphs all belong to November 2014 and onwards. A piece of insider information to explain this phenomenon: we released version 2.0 of our app in early November 2014, which was a radical change in the UX of the app along with a lot of other features. According to the data, iqraaly 2.0 in fact had a better effect on short-term usage and retention of the app.

This table and graph have been the core of our aha-moment analysis project. Reaching them while still exploring the way, though explainable in a matter of minutes, in fact took much more time and effort than that. Naturally, reproducing this graph in the future for new datasets shouldn’t take more time than you require to say “cat-in-the-hat”.
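Indeed, continuing the illustrative sketch from earlier, the monthly averaging, normalization and plotting boil down to a few lines (matplotlib assumed for the plot):

```python
import matplotlib.pyplot as plt

# Average the daily cohorts within each calendar month.
monthly = active.groupby(active.index.to_period("M")).mean()

# Normalize: Day-1 corresponds to the full cohort size, so it becomes 100%.
normalized = monthly.div(monthly[1], axis=0) * 100

# One decay curve per month; trends are now comparable across months.
normalized.T.plot(xlabel="Days since 1st listen",
                  ylabel="% of cohort still active")
plt.show()
```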

The next step was to start defining “forever” as stated earlier (long time, huh?). I think you realize now why I made it a golden rule to write down your path through the data :D

By observing the same graph as marked-up below, we realized that there were 2 very interesting points when defining “forever”: Day 80 and Day 50. Around those days there seemed to be some convergence between most (if not all) of the graphs, and also the lines showed some stabilization around Day 80. So, we decided to define forever as Day 80, while defining “semi-forever” as Day-50.

Why did we define a “semi-forever” point?

Well, we were at a point where we actually had no idea what we would find, so we decided to be greedy in our data analysis until we identified the path. The “semi-forever” point turned out to be very interesting later on.

How do we compare “forever” users with the rest?

Now that we defined “forever”, we can isolate the users of each cohort who have been with us until “forever” relative to their respective cohorts, and then compare their behavior with their peers. The question now is: at which point(s) in their lifetime would it make sense to do the said comparison?

This question took us back to the monthly cohort percentages table. After rigorous debates, we decided to look at the points in this table where the number of users “dropped significantly” and similarly across the majority of the months. After lengthy iterations, and more debates to define what a “significant drop” in users means, we decided to define the “significant drop” as a 5%+ drop in user base.
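As a sketch of how such drop points could be pulled out of the normalized table programmatically (the 5% threshold is ours from above; reading “the majority of the months” as more than half is my assumption):

```python
# Day-over-day loss in the normalized curves, in percentage points.
drops = -normalized.diff(axis=1)

# A day qualifies as a comparison point if it shows a 5%+ drop
# in more than half of the monthly cohorts.
is_significant = (drops >= 5).sum(axis=0) > len(normalized) / 2
comparison_days = is_significant[is_significant].index.tolist()
```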

Those are the final points (days) at which we will compare the “forever” users with the “others”

This resulted in us choosing the following days as points of comparing the “forever” users with the others. We also tried to justify our choice by picking points that carried significance beyond just being significant drops in user base, in order to give the numbers and insights more meaning. The final points of comparison were as follows:

Summary of significance of each day of comparison

So we went into the data once again to observe and compare the forever users with the others at the above-mentioned points. While doing the comparison, we realized that the “other” users, i.e. the non-forever users, aren’t one homogeneous group, and viewing them this way would definitely skew the data. For example, someone about to quit using the product tomorrow will probably not be listening to anything today, and vice versa for those who were listening today.

So, once again, we needed to define more stuff on more sets of data in order to compare the “forever” users with the “others”, who turned out to be another set of segments. Therefore — and it goes without saying that this was concluded after long discussions (=ALD) as well — we decided that at each point of comparison, we shall group the users as follows:

Those are the groups whose behavior we will be comparing at each day as per the previous table
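In code, the grouping might look like the following sketch; the exact boundaries of the R and C groups are my reading of the definitions in the table (R80 = retained to Day-80, R50 = retained to Day-50, R = retained up to this point but churned by the next, C = churned right before this point):

```python
# The last day on which each user was active; the basis for grouping.
last_day = events.groupby("user_id")["day_n"].max()

def group_at(point, next_point, last):
    """Label a user at a given comparison point."""
    if last >= 80:
        return "R80"  # "forever" users
    if last >= 50:
        return "R50"  # semi-forever users
    if point <= last < next_point:
        return "R"    # retained until this point, churned by the next
    if last < point:
        return "C"    # churned right before this point
    return None       # active past the next point; grouped there instead
```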

Quick Recap

So, now that we have the cohorts lined up for the next step of diving deep in the data, let’s recap by looking at our data search plan:

  • We want to figure out the aha-moment of iqraaly
  • We shall do that by comparing the behavior of the “forever” users with the others at certain points in their lifetime per cohort
  • We defined cohorts of users by day of 1st listen, or 1st active use of iqraaly
  • We averaged the months of cohorts and transformed them into percentages
  • We plotted graphs of the above percentages
  • We defined “forever” users
  • We defined the “other” users
  • We defined points in their lifetime to compare the “forever” users with the “others”
  • Now we need to compare the “behavior” or “attributes” of the “forever” users with the “others” at the defined points of comparison

Now comes the time to decide what is the interesting “behavior” or “attributes” of the users (i.e. which user data) to use in the comparison. And here, it is worth having a quick moment of reflection:

All of the previous steps are relatively generic across products, i.e. you can take the same steps, alter them a little bit based on the timeline and usage of your product and apply them semi-directly. However, all of the previous steps were just laying the ground for the really interesting part of the aha-moment analysis, which is looking at the user behavior.

From this point on, all of the analysis activities and variables are more product specific, and you will need to find out which attributes/variables/actions are relevant to your product, and worth examining.

Enough reflection, and back to the interesting stuff.

After laying the grounds, off to the actual work

So, in the world of iqraaly where listening is the main experience, we put together the following initial list of attributes we wanted to look at from a user-point-of-view, and compare the “forever” users with the “others” based on them:

  • Average number of listens per user — i.e. number of instances of listens (like views on YouTube) executed by a single user (short: #listens)
  • Average number of minutes listened per user — i.e. total number of minutes listened to by a single user (short: #minutes)
  • Average episodes listened to per user — i.e. total number of unique episodes listened to by a single user (short: #episodes)
  • Average number of episodes downloaded per user — (short: #downloaded episodes)

Obviously, all of the above attributes are viewed in user context, i.e. on a “per-user” basis. This is because we are trying to examine the actions done and the events experienced by the users; so it makes sense to normalize the analysis on a per-user basis.
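Given the grouping sketch above, the per-user averages for one pair of comparison days could be computed roughly like this (the 29/42 day pair is just one of the comparison points; minutes and episode_id are the assumed event columns, and #downloaded episodes would come from a separate downloads table):

```python
# Per-user totals for the attributes we compare.
per_user = events.groupby("user_id").agg(
    listens=("event_date", "size"),       # instances of listens
    minutes=("minutes", "sum"),           # total listened minutes
    episodes=("episode_id", "nunique"),   # unique episodes listened to
)

# Group each user at one comparison point, then average per group.
point, next_point = 29, 42
per_user["group"] = last_day.map(lambda d: group_at(point, next_point, d))
summary = per_user.groupby("group")[["listens", "minutes", "episodes"]].mean()
```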

Moving along…

As previously planned, we consolidated the following foundation data:

  • Users’ cohorts
  • The points in the users’ lifetime to conduct the comparison
  • The segregation of the users at each point

And we compiled the following table:

Master Table of behavior comparison by cohort

And here is the summary of averages per observed attribute:

The numbers on the column headers are the days of comparison as defined earlier

Once again, it was interesting to watch each month on its own, but we took an average from all of the months to observe the general trend, which resulted in the following tables and graphs:

Average #Listens per user. R80=Users Retained for 80 days. R50=Users Retained for 50 days. R=Users Retained up until this point but churned by the next. C=Users who churned right before this point.
Average unique #episodes listened per user. R80=Users Retained for 80 days. R50=Users Retained for 50 days. R=Users Retained up until this point but churned by the next. C=Users who churned right before this point.
Average #episodes downloaded per user. R80=Users Retained for 80 days. R50=Users Retained for 50 days. R=Users Retained up until this point but churned by the next. C=Users who churned right before this point.

Now, here is where all of the hard work really started paying off. By observing the graphs of the listens, episodes and downloads, you will notice similar trends, where the R80 users actually ramp up their consumption relatively early on and take off into becoming “forever” users. The interesting thing is that the R50 users, who churn after 50 days of actively using iqraaly (!), aren’t that far behind the R80 users in terms of consumption as per the graphs in question.

So this uncovers the 1st interesting observation: part of the “other” users (the R50 group) are relatively close to becoming “forever” users if we manage to push them over a brink of consumption. As per the graphs in question (#listens, #episodes, #downloaded episodes), the “brink” or aha-moment can be defined somewhere around Day 42, which has ~169 listens, ~166 episodes and ~328 downloaded episodes. This seemed inconclusive, and kind of depressing actually.

However, the depression faded away when we looked closely at the #minutes graph:

Average #Listened Minutes per user. R80=Users Retained for 80 days. R50=Users Retained for 50 days. R=Users Retained up until this point but churned by the next. C=Users who churned right before this point.

As you can see here, the R50 and R80 usage graphs defined in minutes are almost identical between Day1 and Day14 of usage, then they start to diverge from D14 onwards. In addition, by Day29, R50 and R80 users have consumed a lot of minutes, ~472 and ~618 respectively; yet R80 users ramp up their consumption and it seems that iqraaly has become one of their habits, while R50 users have not, despite having listened to a full ~472 minutes, i.e. close to 8 hours in a month, which is almost ¾ of an audiobook!
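To spot such a divergence point programmatically rather than by eye, one could compare the cumulative per-user minutes curves of the two groups; the 10% gap threshold below is purely illustrative:

```python
# Cumulative average listened minutes per day, per group.
by_group = events.assign(group=events["user_id"].map(per_user["group"]))
daily = by_group.groupby(["group", "day_n"])["minutes"].sum().unstack(0)
curve = daily.cumsum().div(per_user["group"].value_counts())

# 1st day where the R80 curve pulls ahead of R50 by more than 10%
# (illustrative threshold); our curves visibly diverged around Day-14.
gap = (curve["R80"] - curve["R50"]) / curve["R50"]
divergence_day = gap[gap > 0.10].index.min()
```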

Building Hypotheses for the aha-moment

So, it seems that D14 somehow makes the difference between becoming a forever user and churning further on up the road. And the aha-moment can be hypothesized as:

  • iqraaly’s aha-moment is likely to happen after the user listens to ~618 minutes in 29 days, i.e. 10.3 hours per month, i.e. ONE audiobook in a month.
  • Day14 really seems to be the defining moment in a user’s lifetime on iqraaly, and we have to have already increased daily consumption by Day14, which means we need to start taking action on the users by Day8.
  • Consumption measured in listened minutes seems to increase/drive user retention.

Once again:

“Users listening to 618 minutes or 10.3 hours in 29 days, i.e. roughly 1 audiobook in 1 month, are highly likely to keep using iqraaly forever.”

Concluded action plan and its consequential results

Now here comes the fun part. After all this hard work and eureka moments we’ve experienced, we need to get back to earth, as revelations via data analysis can be very euphoric.

Let’s not forget that the above “conclusions” are mere observations of user behavior done via data analysis, which means we cannot say that they are validated facts; they are only new actionable hypotheses that still require validation.

This is very important, and I will rephrase it just in case:

The observations you find via the data analysis activities that you conduct in your “aha-moment” analysis project still require validation by testing and/or experimentation.

In our case, we needed to verify that consumption measured in listened minutes actually drives retention, i.e. when users listen to more minutes, they are more likely to stay active users of iqraaly for a longer period of time. Translating this into metrics can be as follows:

Listener Retention (= 1-month, 3-month & 6-month retention) is directly proportional to listened minutes.

Actions speak louder than data analysis and hypothesis

To validate the hypothesis resulting from our aha-moment analysis efforts, we did the following:

  • Define our analytics’ “north-star” as monthly listened minutes

A valuable practice in analytics is to define one of your KPIs or metrics as your “North-Star” (as I’ve learned from Alex Schultz, Facebook’s VP of Growth, in this video). It is even more relevant when it comes to driving your users towards the aha-moment. And from my experience, it made sense to define the North-Star as listened minutes, since this was the metric showing the most prevalent aha-moment pattern for our product.

This shifted our focus and daily activities towards achieving maximum possible number of minutes per day, while maintaining steady user growth. And hence, we revised our approach to the following:

  • We started setting daily minutes’ targets and achieving them
  • We decided to revise our content and marketing strategy and product roadmap to push users to consume more content, since this will push them to the aha-moment

The audio content we produce, publish and advertise is a mix between short-form audio (news capsules & articles, averaging 5–7 minutes in audio length) and long-form audiobooks (chapter average 30 minutes, and full book versions spanning up to 15–20 hours). Since consumption turned out to be the main driver of long term user retention, and makes users more prone to experience the aha-moment, then it made sense to do the following:

  • Cut back on the short-form audio, unless performing at the top of the content lines
  • Migrate the freed-up production capacity to produce more audiobooks, which are longer and have more potential of being fully consumed as per our historical data

To be more aggressive and hungry for short-term results, and given the availability of a large variety of content in monstrous quantity in our case (35,000+ produced audio pieces), we decided to go a step further and steadily push content to the users.

We developed a list of fully produced books and started pushing those books to the users via push notifications, one full book per day.

In addition, we shifted our technical focus away from developing new features and new UX to improve discovery, in favor of maintaining uptime and enhancing product response time and overall performance.

This has in fact done wonders for us, as per the following graph:

Total Monthly Listened Minutes and 1-Month User Retention

The timing of implementing the above strategy was between May and June 2015; and as per the above graph, it is clear that there is a 38% jump in listened minutes from March to June (that is, if we disregard April and May, since a lot of downtime and a server migration heavily affected performance then; measured otherwise, the increase would be around 87%).

Additionally, if we observe the 1-month-retention graph, we can see that it strongly correlates with the trend of the total listened minutes per month, i.e. it went up and down following the same pattern as the listened minutes.

So, through this natural experiment, the observed increase in consumption and the correlated user retention allowed us to somewhat confirm the hypothesis raised earlier by the aha-moment analysis: that “retention is driven by consumption”.
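For completeness, here is a rough sketch of how that correlation could be checked on the illustrative events data from earlier; note that it measures co-movement only, in line with the caveat above:

```python
# Total listened minutes per calendar month.
ev = events.assign(month=events["event_date"].dt.to_period("M"),
                   cohort_month=events["cohort_date"].dt.to_period("M"))
minutes_by_month = ev.groupby("month")["minutes"].sum()

# 1-month retention per cohort month: share of users still active on Day-30+.
cohort_size = ev.groupby("cohort_month")["user_id"].nunique()
retained = ev[ev["day_n"] >= 30].groupby("cohort_month")["user_id"].nunique()
retention_1m = retained.div(cohort_size).fillna(0)

# Positive correlation supports (but does not prove) the hypothesis.
print(minutes_by_month.corr(retention_1m))
```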

Obviously, deeper analysis and hypothesis testing could confirm or refute whether the aha-moment of 618 minutes per user in 29 days converts a casual user into a “forever”-user; this is yet to be performed.

I don’t know how to actually conclude this study. However, I welcome everyone’s thoughts. Further analysis maybe? Looking into the demographics of the “forever” users? Looking into the acquisition channels that bring higher-lifetime “forever” users?

Acknowledgements:

People who made this happen: Mahmoud Salaheldin — iqraaly’s former Technical Manager, and the one who did all the digging into our analytics database to uncover all of the above insights. Elmalkey, who (probably) helped Mahmoud out by carrying the operational responsibility of iqraaly’s back-end during this process, and who is currently carrying iqraaly operationally, achieving the awesome results of Sep-Oct-Nov-2015, and counting.

People who have reviewed the draft and provided their invaluable comments whom I would like to mention and sincerely thank: Hussein Mohieldin, Mona Shokrof
