Measuring user engagement

This is part 2 in a series of articles about measuring Key Experience Indicators (KEIs). In this series I go deeper into the Google HEART framework for large-scale data analysis. The framework was created to help you choose and define metrics that reflect both the quality of the user experience and the goals of your product. Each article in the series discusses one of the HEART dimensions — Happiness, Engagement, Adoption, Retention, and Task success. Enjoy and use it!


What is engagement?

In the context of products and services, engagement is the level of user involvement with a product. The term normally refers to the frequency, intensity, or depth of interaction between a user and a product, feature, or service over a given period of time. User engagement is an unbiased behavioral measurement: because it captures what people actually do rather than what they say, it is trustworthy, valid, and reliable.

Why measure engagement?

Understanding people’s engagement with a thing (a feature, service, process, etc.) reveals how well that thing meets a true human need and how hooked people are on using it. Different engagement metrics can reveal not just frequency of usage but also its depth and volume, which can be more indicative of real value, a great product, and a great user experience.

Key mistakes in measuring engagement

Only measuring overall engagement. While overall product engagement is an extremely useful metric to track, it is a business metric rather than an experience metric. Compare “Our overall 7-day active went down from 64% in June to 48% last month” to “7-day active for transacting with us remained flat at 23%, while the time between submitting product reviews per user went down 27% last month compared to June.” The former is interesting yet not very actionable, while the latter is specific, clear, and drives action.

Reporting total count. When you report a total count of users who use the product, that number usually grows because you have more users, not more usage. Although many people and product teams are interested in this number, it is a vanity metric that doesn’t say much about engagement, product health, or growth. It is generally more useful to report engagement metrics as an average per user rather than as a total count. Compare “326,764 overall transactions per week” to “7.8 transactions per user per week.” The former is good for press releases; the latter is more meaningful.
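
To make the contrast concrete, here is a minimal sketch in Python. It assumes a simple in-memory event log; the field names (user_id, action) and the data are illustrative, not from any particular analytics tool.

```python
from collections import Counter

# Hypothetical one-week event log: each entry is one transaction by a user.
events = [
    {"user_id": "u1", "action": "transaction"},
    {"user_id": "u1", "action": "transaction"},
    {"user_id": "u2", "action": "transaction"},
    {"user_id": "u3", "action": "transaction"},
    {"user_id": "u3", "action": "transaction"},
    {"user_id": "u3", "action": "transaction"},
]

transactions = [e for e in events if e["action"] == "transaction"]

total_count = len(transactions)              # the vanity metric
per_user = Counter(e["user_id"] for e in transactions)
mean_per_user = total_count / len(per_user)  # the per-user engagement metric

print(f"Total transactions this week: {total_count}")                # 6
print(f"Mean transactions per user this week: {mean_per_user:.1f}")  # 2.0
```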

Measuring everything. Products and services may have dozens, hundreds, or sometimes even thousands of different features. Taking the approach of “Let’s measure everything and let the data tell us what’s happening” is ineffective and wasteful. It will overwhelm you, your team, and your executives. You will have 26 different dashboards and find yourself staring at them purposelessly. Then you’ll hire a team to do that for you. Instead, be more intentional. Identify the core features, the ones that are most valuable, revenue-driving, or satisfaction-driving, and track their user engagement. Focusing your attention on just a few critical features helps everyone understand the state of the experience and where most effort should be directed.

Thinking engagement for business products is very different from engagement for consumer products. The argument goes that the primary difference between business and consumer products is choice: in most cases, consumers can choose which product to use while employees can’t. While there’s a grain of truth here, when employees are forced to use a crappy product, they limit their engagement to the minimum necessary, avoid engaging deeply, and go out of their way to find workarounds. This behavior shows up in engagement metrics and can be corroborated by happiness measurements.

Three engagement metrics

Percent of 1-, 7-, or 30-day active users per feature: The percentage of users who used a certain product feature (out of all users who had the opportunity to use it) during a given period of time (usually 1, 7, or 30 days). This means that “1-day active users” are users who used a feature during the last day, “7-day active users” are users who used it during the last seven days, and so on. Some organizations call the 7-day version of this metric L7 (short for “last 7 days”).
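
As a sketch, here is one way to compute this metric in Python, assuming an event log of (user_id, feature, timestamp) tuples and a known population of users who had the opportunity to use the feature. All names and data are hypothetical.

```python
from datetime import datetime, timedelta

now = datetime(2024, 6, 30)
window_start = now - timedelta(days=7)

# Hypothetical event log: (user_id, feature, timestamp).
events = [
    ("u1", "reviews", datetime(2024, 6, 28)),
    ("u2", "reviews", datetime(2024, 6, 25)),
    ("u3", "reviews", datetime(2024, 6, 10)),  # outside the 7-day window
    ("u2", "checkout", datetime(2024, 6, 29)),
]

# All users who had the opportunity to use the feature.
eligible_users = {"u1", "u2", "u3", "u4", "u5"}

def pct_n_day_active(feature, events, eligible, start, end):
    """Percent of eligible users who used `feature` within [start, end]."""
    active = {u for u, f, ts in events if f == feature and start <= ts <= end}
    return 100 * len(active & eligible) / len(eligible)

pct = pct_n_day_active("reviews", events, eligible_users, window_start, now)
print(f"7-day active for reviews: {pct:.0f}%")  # 40%
```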

Mean number of [key action] per user: The average number of times a user performs a key action with the product or service during a given period. For example, “On average, users scheduled 3.2 meetings with realtors last week,” “On average, each user ordered 5.8 different products last month,” or “On average, a user logged 67.5 miles of running through our app last month.” These numbers will be meaningful only if you focus on actions that are core to the experience. Don’t be tempted to measure this metric for every single small action with the product.
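
Here is a minimal sketch of this metric, again with hypothetical data. One design choice worth making explicit is the denominator: averaging over all eligible users mixes breadth and depth of engagement, while averaging over active users only isolates depth.

```python
# Hypothetical counts of a key action (meetings scheduled) per user, last week.
meetings_scheduled = {"u1": 5, "u2": 3, "u3": 0, "u4": 0}

total = sum(meetings_scheduled.values())

# Averaged over all users (breadth and depth together).
mean_all_users = total / len(meetings_scheduled)

# Averaged over active users only (depth among the already engaged).
active_counts = [n for n in meetings_scheduled.values() if n > 0]
mean_active_users = total / len(active_counts)

print(f"Mean meetings per user: {mean_all_users:.1f}")            # 2.0
print(f"Mean meetings per active user: {mean_active_users:.1f}")  # 4.0
```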

Mean time between [key action] per user: Another actionable way of examining engagement is tracking the time that passes between visits to, or usage of, a specific key feature or service. The ultimate goal is to reduce that time; the assumption is that when the mean time between visits per user goes down, the feature is providing more tangible value to users. Example: “We’ve reduced the time between transactions from an average of 5.1 to 4.3 days per user in the past quarter.”
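
A minimal sketch of this metric, assuming you already have a list of timestamps for the key action per user; the data is hypothetical and the gaps are measured in days.

```python
from datetime import datetime
from statistics import mean

# Hypothetical transaction timestamps per user.
actions = {
    "u1": [datetime(2024, 6, 1), datetime(2024, 6, 5), datetime(2024, 6, 9)],
    "u2": [datetime(2024, 6, 2), datetime(2024, 6, 8)],
    "u3": [datetime(2024, 6, 3)],  # a single action has no gap to measure
}

def mean_days_between(timestamps):
    """Average gap in days between consecutive actions; None if < 2 actions."""
    ts = sorted(timestamps)
    gaps = [(b - a).days for a, b in zip(ts, ts[1:])]
    return mean(gaps) if gaps else None

per_user = {u: mean_days_between(ts) for u, ts in actions.items()}
measured = [g for g in per_user.values() if g is not None]
print(f"Mean time between transactions: {mean(measured):.1f} days per user")  # 5.0
```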

Taking action on engagement data

User engagement data is extremely actionable. Here are some examples of actions you can take with it:

  1. Prioritize work. When certain features demonstrate high engagement, your users are signaling that these features are valuable. This means you should consider developing these features more to make them even more valuable.
  2. Sunset features. On the other side of the scale are features with low or even zero engagement. That usually means it’s time to stop investing in them and maybe put them to eternal sleep. Less is more.
  3. Inform split tests. Split tests are helpful in choosing between different variations of a feature or element. Engagement data helps in focusing these experiments on product areas that matter to your users.

When users are deeply engaged with key product features, it is a strong sign that you have reached product/market fit. User engagement metrics provide you with the most meaningful signals from your audience. They reveal truths you may not have been aware of and indicate whether the user experience is poor, stale, or excellent.

Other articles in this series

Key Experience Indicators: How to decide what to measure?
Measuring user happiness
Measuring user adoption
Measuring user retention
Measuring user task success