16 Key Experience Indicators: Your UX needle

Start measuring the experience. Now.

Tomer Sharon
Agile Insider
7 min read · Oct 5, 2018


Key Experience Indicators (KEIs) provide a quantitative score of a specific, important, and actionable phenomenon related to using a product or service.

Measuring KEIs has the following benefits:

  1. Provide information to decision makers.
  2. Precede or predict business outcomes.
  3. Add quantitative context to qualitative findings and customer anecdotes.
  4. Identify strengths and weaknesses of a product.
  5. Understand the effect of a change in the product.
  6. Understand the value of a product.
  7. Evaluate the user experience of a product.
  8. Get indications of reaching product/market fit.
  9. Provide a baseline to improve from.

The following are 16 KEIs to start measuring when you want to score the experience of your users.

aNPS

Rather than asking your users to predict their future behavior using the NPS question and its bizarre calculations, ask them about actual behavior they have recently demonstrated. The aNPS (stands for actual NPS) question is, “In the last week, have you recommended us to someone?” The answer options are Yes and No, and the score is the percentage of people who answered Yes.
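
The arithmetic is deliberately simple. As a minimal sketch in Python, assuming the answers have been collected as a list of “Yes”/“No” strings (a hypothetical data shape):

    def anps(answers):
        # Percentage of respondents who answered Yes
        yes = sum(1 for answer in answers if answer == "Yes")
        return 100 * yes / len(answers)

    print(anps(["Yes", "No", "No", "Yes"]))  # 50.0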

% satisfied users

The percentage of users who indicated they are happy, satisfied, or delighted with a feature, product, or service. For example, if 100 users responded to your question, 17 of them were unhappy, 24 were undecided, and 59 indicated they were happy, then your score is 59% (100-17-24=59). Your goal would be to increase this number.

Satisfaction score

The average satisfaction score considering all ratings. Continuing the example, the mean satisfaction score for your 100 users would be calculated this way: a happy rating gets 1 point, an undecided rating gets 1/2 point, and an unhappy rating gets zero points. Aggregating the points (17×0 + 24×0.5 + 59×1 = 71) and dividing by the 100 users gives you an average satisfaction score of 71%.

Tip: Please, please, please, be more sophisticated and calculate basic descriptive statistics to figure out the confidence interval. That would give you a more accurate description of your score.
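
A minimal sketch of both satisfaction metrics, plus a 95% confidence interval via the normal approximation, assuming each response has been coded as "happy", "undecided", or "unhappy" (hypothetical labels):

    import math
    from statistics import mean, stdev

    POINTS = {"unhappy": 0.0, "undecided": 0.5, "happy": 1.0}

    def satisfaction(responses):
        scores = [POINTS[r] for r in responses]
        pct_satisfied = 100 * responses.count("happy") / len(responses)
        score = mean(scores)  # 0.71 for the example above
        # 95% confidence interval, normal approximation
        margin = 1.96 * stdev(scores) / math.sqrt(len(scores))
        return pct_satisfied, score, (score - margin, score + margin)

    responses = ["unhappy"] * 17 + ["undecided"] * 24 + ["happy"] * 59
    print(satisfaction(responses))  # (59.0, 0.71, (~0.63, ~0.79))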

7-day active

The percentage of users who used a certain product feature (out of all users who had the opportunity to use it) during a given period of time (usually 1, 7, or 30 days). This means that “1-day active” users are those who used the feature during the last day, “7-day active” users are those who used it during the last seven days, and so on. Some organizations call this metric L7 (short for ‘last 7 days’).
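
A minimal sketch, assuming a hypothetical last_used dict that maps every user who had the opportunity to use the feature to their most recent usage time:

    from datetime import datetime, timedelta

    def n_day_active(last_used, days=7, now=None):
        now = now or datetime.now()
        cutoff = now - timedelta(days=days)
        active = sum(1 for ts in last_used.values() if ts >= cutoff)
        return 100 * active / len(last_used)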

# of [action] per user

The average number of times a user performs a key action with the product or service. For example, “On average, users scheduled 3.2 meetings with realtors last week,” “On average, each user ordered 5.8 different products last month,” or “On average, a user logged 67.5 miles of running through the app last month.”

Time between [action] per user

Time that passes between visits to, or usage of, a specific key feature or service. The ultimate goal is to reduce that time; the assumption is that when the mean time between visits per user drops, the feature is providing more tangible value to users. Example: “We’ve reduced the mean time between transactions per user from 5.1 to 4.3 days in the past quarter.”
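
Both this metric and the previous one fall out of the same per-user event log. A minimal sketch, assuming a hypothetical events dict that maps each user to a sorted list of timestamps of the key action:

    from datetime import timedelta
    from statistics import mean

    def actions_per_user(events):
        # e.g., "On average, users scheduled 3.2 meetings last week"
        return mean(len(timestamps) for timestamps in events.values())

    def mean_time_between(events):
        # Gaps between consecutive occurrences, pooled across all users
        gaps = [later - earlier
                for timestamps in events.values()
                for earlier, later in zip(timestamps, timestamps[1:])]
        return sum(gaps, timedelta()) / len(gaps)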

Adoption rate

The percentage of new users of a feature. The formula for calculating adoption rate is: adoption rate = number of new users / total number of users. For example, if you have a total of 1,000 users, of which 250 are new, then your adoption rate is 25% (250/1,000). The adoption rate should always be calculated for a specific time period. For example, to calculate an adoption rate for the month of July, count the users who used the feature for the first time any day between July 1 and 31, then divide that number by the total number of users on July 31, the last day of the month.
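
A minimal sketch of the July example, assuming a hypothetical first_used dict that maps each user to the date they first used the feature:

    from datetime import date

    def adoption_rate(first_used, start, end):
        new = sum(1 for d in first_used.values() if start <= d <= end)
        total = sum(1 for d in first_used.values() if d <= end)  # user base as of `end`
        return 100 * new / total

    first_used = {  # hypothetical sample data
        "u1": date(2018, 6, 2), "u2": date(2018, 7, 5),
        "u3": date(2018, 7, 19), "u4": date(2018, 5, 28),
    }
    print(adoption_rate(first_used, date(2018, 7, 1), date(2018, 7, 31)))  # 50.0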

Time to 1st [action]

The mean time it takes a new user to try an existing feature, or an existing user to try a new feature, for the first time. That time can reflect how quickly users understand the feature’s value, get curious about its name and promise, or run into a context that makes the feature attractive. For example:

  • Time to first click a navigation item from when a user opened the homepage is 4.7 seconds.
  • Time to first usage of a hotel concierge service from check-in is 16.5 hours.
  • Time to first transaction on an eCommerce website from when an account is first created is 21 days.

I recommend you identify key actions with the product or service first and not measure this metric for every single small action that can be completed with the product or service.
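
A minimal sketch, assuming two hypothetical dicts keyed by user: created (when the account was created) and first_action (when the key action first happened, present only for users who performed it):

    from datetime import timedelta

    def mean_time_to_first(created, first_action):
        # Users who never performed the action are absent from
        # `first_action` and therefore excluded (a deliberate choice).
        deltas = [first_action[user] - created[user] for user in first_action]
        return sum(deltas, timedelta()) / len(deltas)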

% users who performed [action] for the 1st time

A slightly different way to examine the first-time experience: the percentage of users who performed an action you care about for the first time in a given time period. For example, “86% of users purchased at least three products through our mobile app in the month of July.”

Retention rate

The percentage of retained users over time. To calculate the retention rate, you need two numbers: the number of users at the beginning of the time frame, and the number of those users who are still users of the product at the end of it. To get the retention rate, divide the latter by the former. For example, if on July 1 you had 100 users, and by August 1, 94 of those users continued to use the product or feature, then your retention rate is 94%. Your churn rate in this example would be 6% (that’s your leaking bucket of 6 users who are no longer with you). To clarify: if 12 new users started using the product during that period, they are left out of July’s retention rate calculation.
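
A minimal sketch of the July example, representing each user base as a set of user ids (a hypothetical representation):

    def retention_rate(users_at_start, users_at_end):
        # Users who joined mid-period are not in `users_at_start`,
        # so they never enter the calculation.
        retained = users_at_start & users_at_end
        return 100 * len(retained) / len(users_at_start)

    july_1 = {f"u{i}" for i in range(100)}
    aug_1 = (july_1 - {f"u{i}" for i in range(6)}) | {"new1", "new2"}
    print(retention_rate(july_1, aug_1))  # 94.0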

Upgrade rate

Sometimes, product subscriptions, plans, or even features are tiered. An important retention indication is when users choose to upgrade. To calculate the upgrade rate, divide the number of users who upgraded in a given time frame by the total number of users. For example, if during the month of July 12 users chose to upgrade from a tier 1 to tier 2 subscription, and by July 31 there was a total of 100 users of the product (in both tiers), then the upgrade rate for July was 12%.

Time to churn

Another actionable way of examining retention is tracking the average time that passes between becoming a user of a product, feature, or service and leaving, churning, or downgrading (if relevant). The ultimate goal is to increase that time; the assumption is that the longer users stay, the more value the feature is still providing them. Example: “In the past year, we’ve increased time to churn from an average of 65 to 88 days.”

Task success rate

The level to which users are able to successfully complete tasks using the product. Failure to complete a task scores 0%, success equals 100%, and there are all kinds of states in between to grade partial success. Task success rate is calculated as the average across all users (in a given time period) for all tasks. For example, if you measure task success for tasks A, B, and C and the rates are 70%, 80%, and 100%, respectively, then the overall task success rate is 83% ((70+80+100)/3).
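
A minimal sketch of the calculation, assuming per-user scores between 0 (failure) and 1 (full success), with partial credit in between (the data shape is hypothetical):

    from statistics import mean

    def task_success_rate(scores_by_task):
        # scores_by_task: task -> list of per-user scores in [0, 1]
        per_task = {task: mean(scores) for task, scores in scores_by_task.items()}
        return 100 * mean(per_task.values()), per_task

    overall, per_task = task_success_rate({
        "A": [0.6, 0.8], "B": [0.8, 0.8], "C": [1.0, 1.0],
    })
    print(round(overall))  # 83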

Time-on-task

The average amount of time it takes users to complete a given task, from the moment they start until they are done. It seems straightforward, yet time measurement is complicated. Be sure you are aware of, and mitigate, common traps such as users trying too hard, how to treat times from failed tasks, and how to normalize skewed time data. Jeff Sauro has published great short, practical articles about measuring time on task. Read them all.
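
One normalization approach Sauro recommends for skewed, small-sample time data is reporting the geometric mean instead of the plain average. A minimal sketch, with made-up completion times for successful tasks only:

    from statistics import geometric_mean

    times = [34.0, 41.5, 38.2, 120.9, 45.3]  # hypothetical, in seconds
    # One slow outlier inflates the plain mean to ~56.0s; the geometric
    # mean (~49.5s) is a more robust summary of skewed time data.
    print(geometric_mean(times))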

Lostness

The lostness metric is a measure of efficiency using a digital product. It tells you how lost people are when they use the product. Lostness scores range from zero to one. A high score (closer to one) means that people are very lost and having trouble finding what they need. A low score (closer to zero) means that people find what they want relatively easily. Lostness is calculated from the optimal and actual number of steps it takes a user to complete a task.
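
The article doesn’t spell the formula out; the usual formulation (after Pauline Smith’s lostness measure) combines three counts: N, the number of different pages visited; S, the total number of pages visited; and R, the minimum number of pages required for the task. A minimal sketch:

    import math

    def lostness(unique_visited, total_visited, optimal):
        # L = sqrt((N/S - 1)^2 + (R/N - 1)^2)
        return math.sqrt((unique_visited / total_visited - 1) ** 2
                         + (optimal / unique_visited - 1) ** 2)

    print(lostness(7, 10, 4))  # ~0.52: fairly lost
    print(lostness(4, 4, 4))   # 0.0: a perfectly efficient path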

Abandonment rate

The percentage of users who abandoned a task prior to completing it. IMPORTANT: Analytics will not get you a reliable number here. If you don’t know what visitors’ motivation is, you cannot assume they abandoned a certain path or process. The way to get a reliable abandonment rate is through controlled environments such as UserZoom, where you create motivation for users by giving them tasks to complete.

Bonus KEI: Team empathy score

The percentage of product team members who observed or interviewed at least one user using the product or a prototype in the past two weeks. Once every two weeks, ask team members a simple question: “In the past two weeks, have you observed or interviewed a user using the product?” Provide two answer options: Yes and No. The empathy score is the percentage of team members who answered Yes.

Conclusion

Start measuring the experience. Now. Don’t futz with how to create different views of the data based on different segments or additional sophisticated analysis “needs”. Just do it. Start there. You will know so much more about your audience and be able to react to drastic movements of your new needle.
