The HaTS — Happiness Tracking Survey

Prasanta Kumar Mohanty
Aug 11, 2021


Customer satisfaction was once a buzzword; now it is a standard. Not measuring it is like asking your clients to unsubscribe from your service. Customer satisfaction has become one of the most important factors in deciding whether a company is successful, which is why tracking customer satisfaction metrics is crucial.

First and foremost, happy customers are loyal customers. Customers tend to return when the service or product they purchased was satisfactory.

In the past, we used many methods to capture customer satisfaction metrics, mostly a fixed set of questions asked in the context of a product. These samples were not random and did not target specific improvements. Below are some of the methods I am aware of.

Different customer survey measurements

One of the most popular methods, used by a large majority of companies, is the Net Promoter Score (NPS), which gives the product a score based on the customer’s likelihood of recommending that product or service to a friend. This score has become widely adopted because of its lightweight nature: it is only one question on an 11-point scale (0–10) that is easy for customers to understand.

By using this kind of scale, it is possible to classify customers into 3 categories:

  • Detractors — those who rated their likelihood at 6 or lower. You can tell they are not particularly interested in spreading positive word about your services.
  • Passives — those who gave a score of 7 or 8. They might be willing to recommend your products, but also wouldn’t hesitate to switch brands.
  • Promoters — those who gave a score of 9 or 10. They are your brand’s evangelists: they keep coming back to your product or service and would happily recommend it further.
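
The three buckets above also yield the overall score, computed as the percentage of promoters minus the percentage of detractors. A minimal sketch, using hypothetical ratings:

```python
def nps(ratings):
    """Net Promoter Score: % promoters minus % detractors, in -100..100."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical sample of 0-10 responses
ratings = [10, 9, 9, 8, 7, 6, 3, 10, 8, 5]
print(nps(ratings))  # 4 promoters, 3 detractors out of 10 -> 10.0
```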

The NPS has many flaws, most importantly that it does not allow the product team to understand where the product is breaking down or succeeding, or to track customer satisfaction as the product changes.

This is where HaTS becomes a terrific tool, one that aims to “collect attitudinal data at a large scale directly in the product and over time”.

What is HaTS?

Happiness Tracking Surveys (HaTS) are designed for ongoing tracking of user attitudes and experiences within the context of real-world product usage at a large scale. HaTS represents an optimized approach to data collection, sampling, and analysis.

HaTS is designed to track metrics such as:

  • overall satisfaction
  • likelihood to recommend
  • perceived frustrations
  • attitudes towards common product attributes

Another goal of HaTS is to segment customers to identify how different subsets of the user base use and perceive the product. This allows the team to follow up with certain customers to figure out their frustrations and intentions when using the product and why that varies from other types of customers.

How is HaTS implemented?

HaTS depends on random sampling of customers. Some of the implementation strategies are:

  • The same customer is not surveyed again within 12 weeks, to avoid survey fatigue
  • The aim is a sample of a minimum of 400 to a maximum of 1,000 responses, which reduces the margin of error and gives maximum confidence
  • The survey mode should always be the web
  • No pop-up invitation before the survey begins, as it distracts the customer
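
The first two rules above can be sketched in code: a 12-week cooldown check per customer, and the worst-case margin of error that motivates the 400–1,000 sample size. The function names are illustrative, not from the paper:

```python
import math
from datetime import datetime, timedelta

COOLDOWN = timedelta(weeks=12)  # no repeat invitations within 12 weeks

def eligible(user_id, last_surveyed, now):
    """A user may be invited only if never surveyed or the cooldown elapsed."""
    last = last_surveyed.get(user_id)
    return last is None or now - last >= COOLDOWN

def margin_of_error(n, z=1.96):
    """Worst-case (p=0.5) margin of error for a simple random sample of n,
    at 95% confidence (z=1.96)."""
    return z * math.sqrt(0.25 / n)

print(f"{margin_of_error(400):.1%}")   # ~4.9% at n=400
print(f"{margin_of_error(1000):.1%}")  # ~3.1% at n=1000
```

This is why the response target stops at about 1,000: beyond that, the margin of error shrinks very slowly for the extra survey volume.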

HaTS follows a funnel approach, meaning the questions go from broad to specific, to avoid order effects. This is achieved by breaking the survey into three distinct parts:

  • It starts off broad and high-level before moving to more specific and personal questions. Asking about the product as a whole helps build rapport with the user.
  • After the high-level attributes have been assessed, the more common product attributes/features can be assessed.
  • Finally, it asks questions about the respondent’s characteristics, which may be more sensitive.
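
The three parts above amount to an ordered list of sections, from broad to specific. A sketch of that structure, with illustrative question text (not the paper’s exact wording):

```python
# Funnel ordering: broad -> product attributes -> respondent characteristics.
SURVEY_FUNNEL = [
    ("overall", [
        "Overall, how satisfied are you with the product?",
        "How likely are you to recommend it to a friend?",
    ]),
    ("attributes", [
        "How satisfied are you with its ease of use?",
        "How satisfied are you with its speed?",
    ]),
    ("respondent", [
        "How long have you been using the product?",
    ]),
]

# Questions are always presented in funnel order to avoid order effects.
for section, questions in SURVEY_FUNNEL:
    for q in questions:
        print(f"[{section}] {q}")
```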

Sample Survey Questions

1. Overall Product Satisfaction and Likelihood to Recommend


A 7-point scale is used to optimize validity and reliability while minimizing respondent effort. The scale is fully labeled, without the use of any numbers, to ensure respondents focus entirely on the meaning of the answer options. Note the use of “Neither satisfied nor dissatisfied” instead of “Neutral” as the midpoint, to minimize the effect of satisficing.
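
A fully labeled 7-point bipolar scale might look like the following; the exact labels are my assumption, not quoted from the paper:

```python
# Fully labeled 7-point bipolar satisfaction scale; no numbers are shown
# to respondents. Midpoint is "Neither satisfied nor dissatisfied"
# rather than "Neutral", to reduce satisficing.
SATISFACTION_SCALE = [
    "Extremely dissatisfied",
    "Very dissatisfied",
    "Somewhat dissatisfied",
    "Neither satisfied nor dissatisfied",
    "Somewhat satisfied",
    "Very satisfied",
    "Extremely satisfied",
]

print(len(SATISFACTION_SCALE))  # 7
```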

The recommendation question follows the NPS-style 11-point scoring.

2. Open-ended Frustrations and Areas of Appreciation

These aim to gather frustrations, missing capabilities, and favorite aspects of the product. Frustrations and missing capabilities are combined into one question because, through experimentation, the HaTS authors found that response quality increased when they were grouped rather than asked as two separate questions. These questions are optional, to ensure customers don’t perceive them as too much effort and drop off.

3. Satisfaction with Common Attributes and Product-specific Tasks

Continuing down the funnel, HaTS asks a matrix rating question focusing on different attributes of the user experience. These included perceived “ease of use,” “technical reliability,” “visual appeal,” and “speed.” Tracking these attributes allows the team to pinpoint aspects of the user experience affecting the satisfaction with the overall product.
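
Pinpointing works by comparing responses across attributes, for example by averaging each attribute’s ratings to find the weakest one. A sketch with hypothetical matrix-question data:

```python
from statistics import mean

# Hypothetical matrix-question responses: per-respondent ratings (1-7)
# for each tracked attribute of the user experience.
responses = [
    {"ease of use": 6, "technical reliability": 7, "visual appeal": 5, "speed": 3},
    {"ease of use": 5, "technical reliability": 6, "visual appeal": 6, "speed": 2},
    {"ease of use": 6, "technical reliability": 7, "visual appeal": 5, "speed": 4},
]

# Average each attribute to spot which aspect drags overall satisfaction down.
averages = {attr: mean(r[attr] for r in responses) for attr in responses[0]}
for attr, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{attr}: {avg:.2f}")
```

In this made-up sample, “speed” averages lowest, so the team would know where to dig in with follow-up questions or research.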

Finally, HaTS focuses on getting feedback on specific product tasks.

To conclude: the HaTS question order, question text, response scales, and several other questionnaire design details follow best practices based on extensive academic experimentation, to optimize data validity and reliability. Even though HaTS is based on random sampling among active users, the self-selected nature of survey participation can result in non-response bias that skews results away from the true underlying population values.

References

https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43221.pdf
