Are you consuming a balanced diet of customer insights?
Making customer experience decisions based on the most convenient customer insights is like living on a diet of bread because you live next to a wheat field.
The adage “You are what you eat” is a reminder that indulging in one food group at the expense of others is unhealthy.
The importance of the quality and variety of foods that we consume is evident given the energy and investment in nutrition and dietary science.
The same can be said of information; we are only as good as the information we consume. This is especially true when you’re trying to understand your customers so you can develop a better experience for them.
Our decisions — more specifically, the quality of our decisions — are determined by the quality of the information we consume.
Information professionals, whose performance is measured by the decisions they make, should be as concerned about the quality of the information they consume as athletes are about the effect of diet and nutrition on their performance.
However, decisions are often made on information simply because it is already available or easily acquired, without considering aspects such as accuracy, motive, and method.
One of the most important factors influencing these aspects is the source of the information.
Multiple sources reduce the risk of inaccuracy
One of the most memorable (and provocative) lessons bestowed upon students in Professor John Bremner’s journalism class was the following:
“If your mother says she loves you, check it out.”
Professor Bremner’s sage advice was to be wary of treating a single source as the truth.
To paraphrase: if, in the process of gathering information, someone tells you something of importance, apply a dose of healthy suspicion and consult another independent source.
This practice is sensible and applies beyond journalism. In science, business, and most parts of life, accuracy is paramount to making good decisions.
This tenet certainly applies in the field of customer research, which serves product teams that design and develop great customer experiences for products and services.
Yet, consider how often product experience decisions are made based on a single source of information.
Teams are often encouraged to react quickly with new features or design changes to criticism or suggestions that customers offer in surveys, interviews, or website feedback. Likewise, teams often engage with customers in the field and develop a sense of security with a product, concept, or prototype when a customer tells them that they love it.
Indeed, a single source may seem better than no source, but that is assuming the source is accurate and represents the truth.
If teams were exercising healthy skepticism, they might hear a version of Professor Bremner’s booming voice saying “If your customer says she loves your product, check it out.”
Embrace diverse and unfamiliar sources of information
We like to hear people tell us positive things, but criticism is where the value lies: negative feedback is more likely than praise to lead to improvements.
The preference for positive feedback is one of several natural tendencies that tend to challenge organizations seeking the truth through research.
The tendency to seek verification is a close relative and more commonly known as confirmation bias. Organizations are more aware of this challenge, but it is still powerful and insidious.
Technology and the increased demand for customer research are compounding the challenge, as more people with less experience and formal training perform research. Amateurs are more likely to overlook or be unaware of confirmation bias in their own work.
Technology is putting more research tools into the hands of more and more people. The accessibility of these tools equally empowers professionals and dilettantes. And this brings us to the third natural tendency, which Maslow famously observed: when the only tool you have is a hammer, you tend to see everything as a nail.
As organizations become familiar with research tools, there’s a tendency to binge on the research tool du jour rather than recognize that each tool has strengths and weaknesses. Teams tend to go through phases, over-using certain methods such as focus groups, surveys, usability testing, A/B testing, customer journey mapping, and so on.
In the hands of a master craftsman, the right tool is used for the right problem. Likewise, customer research professionals possess a full toolbox of research tools and methods and are most qualified to lead the decision of which tool or combination of tools is appropriate for the business question.
Sometimes it is necessary to triangulate to find the truth.
Paradigms of Customer Insight
That’s why our team developed a bit more structure around our research function to encourage more checks. We didn’t do this for every research question, but for the most important questions, we would look for common patterns across different types of customer insight.
I like to describe the three paradigms of insight as Voice of the Customer, Behavior of the Customer, and Mass Audience Metrics. Voice of the Customer refers to any stated sentiment or perception of the customer. Behavior of the Customer refers to the observed actions of the user. And Mass Audience Metrics represents knowledge inferred from the aggregation of data across the entire population of users.
One of the most obvious reasons why it’s crucial to look at more than one type is because of the well established fact that what people say and what they do are not always the same. In fact, the field of behavioral economics is predicated on the fact that humans are poor at predicting how they will behave in certain situations.
This should be a red flag for any business teams that put all their eggs in the “Voice of the Customer” basket. Beware! Your customers may tell you that they love your product, or they may say they wouldn’t use a certain feature. But have you taken the time to watch users interact with the product or looked at metrics?
A look at another source of customer insight may reveal a different, more complicated reality and allow a product team to make better decisions.
Using Customer Insights to manage Customer Experience
These types of research also align with the aspects of the customer experience that we aspire to evaluate and measure in order to manage the experience for our users.
For instance, do users see value in the product? The exposed value is a critical aspect of the experience that is measured in the eyes of the customer. If the service or capability does not provide value to the customer, or if the value is present but is not recognized by the customer, then other aspects of the experience are irrelevant. A great way to discover if a product is delivering value is to ask users for their opinion. In this case, perception is reality.
Another aspect of experience is the ease of use, which can be evaluated by observing the effort it takes for customers to use the product. Usability testing is the classic tool for assessing effort through completion rates and time on task. Observations often identify impediments that affect the user’s experience and lead the way to possible improvements.
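To make the two classic usability measures concrete, here is a minimal sketch of how a team might tally completion rate and time on task from a handful of test sessions. The session records and field names are illustrative, not from any particular tool.

```python
# Minimal sketch: summarizing usability-test sessions into the two
# classic effort metrics. Session data and field names are illustrative.

sessions = [
    {"participant": "P1", "completed": True,  "seconds": 74},
    {"participant": "P2", "completed": False, "seconds": 120},
    {"participant": "P3", "completed": True,  "seconds": 91},
    {"participant": "P4", "completed": True,  "seconds": 63},
]

# Completion rate: share of participants who finished the task.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Time on task is conventionally reported for successful completions only.
completed_times = [s["seconds"] for s in sessions if s["completed"]]
mean_time_on_task = sum(completed_times) / len(completed_times)

print(f"Completion rate: {completion_rate:.0%}")       # 75%
print(f"Mean time on task: {mean_time_on_task:.0f}s")  # 76s
```

With only four participants, these numbers identify problem areas rather than prove statistical significance, which is consistent with how small-sample usability tests are typically used.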
Engagement is another powerful and multi-faceted aspect of the user’s experience that can be inferred through the metrics captured from user interactions with the site. From the web logs, we can mine the number of visitors, the repeat users, and the rate of re-visiting. We can calculate abandonment rates and track users’ journeys click by click. And much, much more.
These three aspects of the experience also align with the types of customer insight that we track.
The Challenge for teams
This model for research is based more on how the research is actually used in practice.
However, this is not congruent with traditional models, which organize research by characteristics such as qualitative vs. quantitative, or whether the customer insights were solicited or unsolicited.
Moreover, this model does not represent the way businesses manage the research. In fact, these three customer research insights are often distributed among several departments in an organization. This division creates natural boundaries to the tools that are used to evaluate and measure experience. It also encourages myopia within each group as teams develop a comfort with their source of insight.
For instance, NPS and other customer satisfaction surveys are frequently managed by a group that is dedicated to such customer research. Meanwhile, usability testing is often performed by the researchers in the UX design team. And the web metrics and log analysis is often a third team in a different department.
Enlightened teams must work across teams and departments in order to gather evidence from more than one type of research.
The importance of leveraging customer insights when making important business or product decisions is well established. But too often, teams settle for the most convenient customer insights and scrutinize those insights less.
The most important decisions deserve to be based on the best information. To ensure the information is accurate, consider assessing more than one source for the information. Also consider gathering insights from multiple research paradigms. And beware of confirmation bias and the tendency to favor convenient or familiar methods and tools. Apply healthy skepticism and triangulate multiple types of data.
Once you have good information about your audience, you can leverage it to drive decisions to improve the experience for customers.
Good customer experiences can be evaluated based on three attributes: 1) how well they expose value to customers, 2) how much effort they require of their customers, and 3) the nature of the engagement with their customers.
These three attributes can be measured using methods that focus on the stated perceptions of customers, the observed behaviors of customers, and the relationships inferred from the captured interactions of the audience.