24 Tips on How to Use Text Analysis to Increase Customer Loyalty

Matti Airas
Jul 27, 2017 · 29 min read

We live in an experience economy: loyalty is not just about delivering an exceptional product but making customers feel good about every aspect of your operations and brand. This is why many companies are putting in place systematic ways to track customer experience.

With survey processes like NPS and CES, an ever higher share of the customer’s voice consists of open-ended text comments. Social media discussions, incoming emails and web forms further complicate the picture. Meanwhile, the feedback volume keeps increasing.

I have summarised in this blog post the most important lessons that we have learned during the past six years analysing over 200 million customer comments. Read it and you will become an expert in feedback analytics!

This blog post is focused on analysing open-ended customer comments. Most of what we are saying here can also be applied to analysing employees’ open-ended comments.


Tip #1. Identify the customer experience stakeholders and their information requirements

Your job, as a customer experience professional, is to provide the rest of the organisation with valuable information. This requires identifying who the customer experience stakeholders are and what their information requirements are. Knowing what factors help them succeed in their work can also assist you in designing the optimum feedback gathering, verbatim analysis and insight distribution system.

You cannot improve customer experience without knowing what your customers are talking about (=Topic) and how they feel about your products and services (=Sentiment).

Product managers want to know how customers feel about their product(s)

Among the first people you should contact are the product managers. This is especially true for online services or web shops, or generally for businesses with many services or products.

Product managers are interested in knowing how their product is performing: not just financially, but also how customers feel about the product, what kind of problems they have, and what new features customers would like to see in the future.

The product management report should focus on product attributes.

Process managers are interested in how their touchpoint or process is performing

There are other teams who have a role in the customer experience delivery process. These include process managers, operations quality professionals or functional managers.

You should especially focus on process (touchpoint) managers that are responsible for delivering part of the customer journey.

One of the process managers’ goals is to have a high NPS score or average sentiment. They value information that will enable them to improve the score.

The touchpoint report should be filtered to include only touchpoint-specific comments and outline the customer experience attributes related to that part of the customer journey.

Sometimes getting the content of the process manager report right takes a couple of iterations. You need to go back and forth with the process managers in order to find the right balance between their top-down wishes and your bottom-up capabilities (what the feedback data makes possible).

Contact centers focus on quantitative analysis. You need to get their data for qualitative analysis.

Contact centers receive text-based feedback in the form of emails, web forms and transactional agent performance surveys. Contact centers have their own understanding of what customer experience means. Qualitative analysis isn’t at the top of their list; they are often focused on contact center performance metrics (call waiting time, issue resolution time, first-contact resolution rate, etc.). You need to make sure that you get their data for qualitative analysis.

Once you start delivering regular and relevant CX reports (or dashboards) to the stakeholders, they will become more engaged. Remember to track that they open the reports and use them in their daily, weekly or monthly management cycles. If they don’t, go back to them and find out what is wrong with the reports and why they aren’t using the feedback analysis results in their decision making.


Tip #2. Create a high-volume feedback gathering system

At Etuma, we have analysed hundreds of different feedback processes and formats and seen what works and what doesn’t. For a feedback analysis company, we have become surprisingly expert in the process of gathering feedback.

We have learned how to design a survey process that both maximises the volume of open-ended feedback and provides concrete actionable insights.

If I could start from scratch, this is the kind of system I would create.

Make spontaneous feedback giving as easy as possible

Give customers the possibility to choose the channel they prefer. This includes text messaging, web forms, Twitter, Facebook and email. Remember, the customer chooses the feedback channel, not you.

Run transaction-based surveys for key touchpoints

We like NPS but it can be any format as long as it is short and relevant for the experience (touchpoint) you are tracking. In transactional surveys the two most important things are timing and brevity: conduct the survey soon after the event using SMS or email, and keep the survey as short as possible.

Dig deeper when you don’t have enough information

Customer experience management platforms enable you to run sophisticated rule-based surveys. You should use this functionality to find out e.g. why people stopped using a certain product or when their survey response didn’t explain the reason for their reaction.

Run periodic relationship surveys on a representative sample

Spontaneous and touchpoint-specific surveys often fail to give a comprehensive view of brand, competitor and marketing related issues. It is important to keep the relationship survey format as similar as possible to the transaction survey (the periodic survey can have more questions). This gives you the ability to analyse customer feedback as a whole.

Connect your company’s and competitors’ Facebook pages and Twitter handles to the analysis service

More and more of the brand, product and service discussion is moving into social media. You need to connect your main social media channels into the verbatim analysis system.

Don’t try to get an answer to every question in one survey. Create a continuous high-volume communication process, in which the complete picture is formed from many small fragments.


Tip #3. Design and implement customer experience specific database(s)

Extracting actionable insight is difficult. It takes quite a bit of work, but mostly it requires thinking and planning. One of the most important things you need to do is to design CX databases.

You, a CX professional, need to own this data. Don’t let BI or IT people set restrictions. Making compromises will greatly hinder your ability to do your work well. Good data is paramount!

Design the CX database to fulfill your reporting and analytics requirements

The customer experience management system is not just created for CX stakeholders. You, or a data analyst, need to analyse the data yourself as well. This is especially important when analysing the feedback from the customer perspective (Tip #8).

In stakeholder (role-based) reports the hypothesis is known. In data analysis you need to form the hypothesis. Sometimes there is no hypothesis at all, such as when detecting weak signals. Make sure your database has the right dimensions to make all this possible.

I am not going into the specifics of the actual database dimensions. You know what kind of structure makes sense for your company. The graphic below gives you an idea about what kind of dimensions your CX database should have.

12 CX database design and implementation principles:

  1. Make the response id the data record identifier
  2. Make the customer id the secondary identifier
  3. Date every survey response
  4. Make sure that your primary operational dimension is a background variable
  5. Create a separate database for each survey process
  6. Group data that makes sense to group (products, age)
  7. Name the dimensions so that they are easy to understand in reporting (no codes)
  8. Create one database that has all the feedback (includes data fields that are in common to all survey processes)
  9. Make the transactional survey touchpoint a background variable
  10. Have a limited number of dimensions (columns)
  11. Keep the database(s) as small as possible: limit the volume of historical information
  12. Structure the text analysis results hierarchically so that you can drill down contextually to the customer comment

Designing and implementing a dedicated customer experience database is definitely worth the effort. Create customer experience databases that fulfill your company’s reporting (or dashboarding) requirements and your actionable insight analysis needs! Don’t tolerate bad data (structure).


Tip #4. Decide how to categorise the feedback

You cannot analyse open-ended customer feedback without categorising it. This categorisation has to be done systematically, relevantly and consistently, and should cover all your feedback sources. Your categorisation system needs to be uniform across the organisation; otherwise the feedback cannot be used in top management reporting (Tip #5).

Categorisation turns open-text into statistical information, which enables you to do the following things:

  • Monitor trends;
  • Detect patterns;
  • Benchmark organisational units; and
  • Automatically distribute the actual customer comments in real time based on stakeholder (Tip #1) roles.
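To make the idea concrete, here is a deliberately naive keyword-based categoriser. The topic names and keyword lists are invented for illustration; a real system maps words and phrases to topics contextually (see Tip #5):

```python
from collections import Counter

# Illustrative only: three made-up topics with made-up keyword lists.
TOPIC_KEYWORDS = {
    "PRICING":  ["price", "expensive", "cheap"],
    "DELIVERY": ["delivery", "shipping", "courier"],
    "STAFF":    ["staff", "service", "friendly"],
}

def categorise(comment):
    """Return the set of topics mentioned in one open-ended comment."""
    text = comment.lower()
    return {topic for topic, words in TOPIC_KEYWORDS.items()
            if any(w in text for w in words)}

def topic_counts(comments):
    """Turn free text into statistics: per-topic mention counts,
    which is what makes trend monitoring and benchmarking possible."""
    counts = Counter()
    for c in comments:
        counts.update(categorise(c))
    return counts
```

Counting mentions like this per week or month is what turns open text into trend lines and benchmarks.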

There are four ways to categorise feedback:

1. Tabulate the feedback manually

If you only get a few open-ended comments, this is a manageable method. With higher volumes the task becomes slow and expensive, and the results are inconsistent. Humans can handle about a dozen categories, which means that, for example, all weak signals end up in the “other” category.

2. Feature in the CEM platform

Before committing to a CEM vendor’s own text analytics, make sure that their text categorisation service fulfills your quality and granularity requirements. A good way to do this is to ask for a demo.

Another good checkpoint is whether the CEM company has a team of linguistics experts on staff. If not, put extra effort into scrutinising the categorisation accuracy and relevance.

3. 3rd party embedded analytics in the CEM platform

If your CEM vendor doesn’t have a text analytics feature, the best option is to look at whether they have partners that can provide the service as an embedded analytics solution. Etuma has embedded analytics connectors for e.g. Qualtrics, Questback, Salesforce and Zendesk.

4. 3rd party open-ended feedback categorisation service outside the CEM platform

The fourth option is to export all the open-ended comments and the relevant background variables (metadata on the text comment) into a 3rd party analysis service.

There are many companies that do this. Here is a list of 62 text analysis vendors.

Before selecting the vendor, read this blog post about what kind of issues to consider in the vendor validation process.


Tip #5. Create a uniform categorisation system

Everybody knows that loyal customers are more profitable. But it is difficult to monitor and increase loyalty without knowing what the customers are talking about. Once you know how much customers talk about a specific service aspect and what is the sentiment of this discussion, you can start prioritising the customer experience improvement efforts.

Almost all contact centres tabulate incoming emails and web forms manually according to a preset classification system. But open-ended comments in surveys and social media are, at best, distributed to department managers based on metadata, without any kind of categorisation. The customer’s voice is lost in organisational silos.

A uniform categorisation system enables you to report verbatim analysis results in the same way as structured information (like sales figures) is reported. It creates a common language within the company and brings the customer’s voice into the decision-making process.

The categorisation system should be designed so that it can be used to categorise all customer feedback. This includes emails and web forms, NPS and CES comment fields, verbatims in surveys, and social media comments.

Here are the most important categorisation system requirements:

  1. Encompassing: Capture all relevant words, phrases and brands from open-ended feedback.
  2. Accurate: “Pool” those words and phrases correctly into topics and detect the sentiment for each topic mentioned. Make sure that your feedback analysis vendor maps a word or phrase depending on the context, because a word or phrase can have a different meaning in a different context.
  3. Relevant: The categorisation system needs to be “tuned” to your industry; otherwise you might not have the necessary granularity for different organisational departments and roles (e.g. in the insurance industry you want all types of insurance as separate topics, whereas if you are running a retail store chain, one topic called “insurance” should be enough).
  4. “Whole world”: There should not be a topic called “other”. Make sure that the categorisation system covers the “whole world” (or in this case the whole industry). This enables you to detect weak signals and new, emerging (unexpected) topics.
  5. Multi-language: This doesn’t apply to all companies. But if your customers give feedback in multiple languages, the categorisation needs to be “mapped” across languages. This gives you the ability to view the analysis results in one language.

How to create or source a categorisation system

There are multiple ways to create the categorisation system, but whatever way you choose, make sure that the system takes into account both the top-down (what the management wants to see) and the bottom-up (what the text makes possible) approach. A well-working categorisation system requires a couple of iterations and is a balance between these two views.

  • Manual tabulation. This is necessarily more of a top-down approach, because humans can only handle about a dozen discrete categories. This method, as already stated in Tip #4, is slow, and it can also be expensive at higher feedback volumes. It is more suitable for B2B companies. It is important to note that there will be no whole-world view: it will be difficult to detect weak signals. Also, trend analysis can be unreliable because of the inconsistency of human tabulation.
  • Do it yourself using text analysis modelling tool. There are excellent tools like SPSS and SAS to create a model to analyse open-text. The challenge with these tools is the steep learning curve and the need to continuously tune the analysis. You need to have dedicated, well-trained professionals to take care of this work.
  • Use text analysis vendor’s industry specific categorisation system. Some text analysis vendors focused on analysing customer feedback have created productised industry specific categorisation systems. They also continuously improve the accuracy and relevancy of these systems. These systems are usually very accurate but might not fulfill your information granularity requirements. Ask for a demo, and you will see whether your needs are fulfilled.

Jeanne Bliss wrote a blog post in 2014 about the need for uniform categorisation. The benefits of systematic and uniform categorisation are obvious, yet still today very few companies categorise their customer feedback uniformly.

Designing and implementing a uniform categorisation system might seem like a daunting task but the benefits are clear. Uniformly categorised customer comments have the power to transform your organisation.


Tip #6. Design a four-layer reporting system

Different organisational layers like to consume information in different ways. Executives like static reports with KPIs. Managers need a dashboard with signals about problems (or opportunities) and the ability to dig deeper to find the root cause. Frontline employees just want to get their jobs done; what makes the most sense for them is a contextually relevant list of actual customer comments. Analysts need to dig deeper to detect weak signals and emerging trends, and to do predictive analytics. That is why the reporting tools and the level of information in them need to be different for each organisational layer.

People don’t like change — minimise the (perceived) change

  1. Try to use the same reporting, visualisation and analytics tools as the other departments.
  2. If you have to introduce a new tool, implement single sign-on to make the login process easier.
  3. Make the reports and dashboards look like other reports (colors, fonts and other intranet or dashboard conventions).
  4. If you are using a CEM platform and it has an embedded verbatim analytics service, use the CEM platform’s visualisation functionality. (E.g. Qualtrics has Etuma as an embedded analytics solution. Qualtrics Vocalize can visualise Etuma verbatim analysis results.)

Here are some ideas and principles for creating the four-layer insight distribution system:

Layer 1: Executive reporting — getting strategy right

Executives require a common language. Make an extra effort in presenting the data so that it is self-explanatory. For example, the categories (topics) should be named according to the industry terminology.

Top management wants short static reports comparing time periods (month or quarter). They need to understand what has changed and why, and what is being done to remedy the situation.

Executive reporting should include a qualitative part. Take the most representative actual customer quotes and use them in the report to make a trend or pattern more concrete and understandable.

At the end of the report you should include a section about the emerging trends.

Layer 2: Management dashboards — detect and improve

Managers are more self-sufficient than executives and don’t necessarily need a common language: for them the most important thing is to make their jobs easier and more valuable.

Managers are often technically savvy. They want a dynamic, close to real-time dashboard, which they can tune into their role and interests. It is crucial that they can drill into the actual customer comments for root-cause analysis.

If the comments are in a foreign language, the tool needs to have an embedded translation service or all the foreign language comments need to be translated.

Different types of managers require information tuned to their role. We will talk about this in more detail in Tip #9, Use topics to create role-based reports.

Layer 3: Frontline — getting things done

Frontline employees are so busy with their normal day-to-day work that they don’t need dashboards or reports. They want something simple and concrete that they can glance at when they have time. It doesn’t get more concrete than actual customer comments filtered to their role and current context (e.g. location). An ideal way to do this for frontline employees is a simple smartphone app.

Layer 4: Data analytics — figuring out the unknown

What is left out of the standard reporting is the difficult part. You need to come up with a system in which you can detect weak signals (Tip #18), pinpoint emerging trends and do predictive modelling.

You are also responsible for teaching managers to use the dashboard creatively and for continuously improving the executive reporting (don’t make too big changes there, or you will break the “common language” you have managed to create).


Tip #7. Define the customer journey

It is easy to define the customer journey from the top down: you plot the touchpoints and set them in chronological or some other logical order. It is much harder to monitor touchpoint performance.

One solution is to implement touchpoint-specific transactional surveys. The problem with this approach is that customers often talk about issues that have nothing to do with the touchpoint that initiated the survey: themes like brand, competition and pricing. And what about relationship NPS or spontaneous feedback? How do you categorise those?

The trick is to predefine a hierarchical touchpoint-topic categorisation system. In this system all the issues that customers talk about are automatically mapped into touchpoints.

In the airline example above the topics are on the right and the touchpoints on the left. Topic size tells us how much customers are talking about a specific topic and the colour tells the average sentiment.

Once you have created a two layer categorisation system, you can easily measure touchpoint performance, pinpoint problem areas and drill-down to the root cause (=contextually relevant topic specific sentences).
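As a sketch, the two-layer roll-up can be as simple as a topic-to-touchpoint lookup plus an aggregation. The airline-style topic and touchpoint names below are invented for illustration:

```python
# Hypothetical hierarchy: every topic is pre-mapped to one customer
# journey touchpoint, so topic-level analysis results roll up into
# touchpoint performance automatically.
TOPIC_TO_TOUCHPOINT = {
    "SEATS": "In-flight", "MEALS": "In-flight",
    "CHECK-IN QUEUE": "Check-in", "BAGGAGE DROP": "Check-in",
}

def touchpoint_performance(topic_results):
    """topic_results: list of (topic, sentiment) pairs from the
    verbatim analysis. Returns {touchpoint: (mentions, avg_sentiment)},
    i.e. touchpoint volume plus average sentiment for drill-down."""
    agg = {}
    for topic, sentiment in topic_results:
        tp = TOPIC_TO_TOUCHPOINT.get(topic, "Unmapped")
        n, total = agg.get(tp, (0, 0.0))
        agg[tp] = (n + 1, total + sentiment)
    return {tp: (n, round(total / n, 2)) for tp, (n, total) in agg.items()}
```

From a touchpoint that stands out here, you would drill down to its topics and then to the topic-specific sentences.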


Tip #8. Examine your feedback data from the customer perspective

Managers analyse customer feedback from their own perspective. Often this view is touchpoint or function specific. Their job is to extract actionable insights that enable them to improve their own department’s performance. Your job, as a CX professional, is to analyse the customer experience as a whole.

What if, for example, one specific customer has left multiple responses to touchpoint and relationship surveys? Or submitted a complaint to your contact center? Or, in the worst case, is talking about your company negatively in social media? You need to be able to capture these types of events and tie them together based on the customer id.

You need to monitor how different customer segments experience your service (assuming that you get a lot of feedback and have the right background variables in your customer experience database, Tip #3). For example,

  • how your most valuable customers (those who spend the most money) experience your brand across all touchpoints; and
  • what customers who left your service or haven’t bought anything in a long time talked about before churning.

Customer experience is often analysed in a vacuum. It is done within an organisational silo and almost always within the confines of one survey. Your job, as a CX professional, is to analyse the customer experience as a whole, across all the organisational units and feedback channels.
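A minimal sketch of tying events together by customer id, assuming each feedback event carries a customer_id, channel and date (the field names are hypothetical, not a real API):

```python
from collections import defaultdict

def journeys_by_customer(events):
    """events: list of dicts with 'customer_id', 'channel', 'date'
    and 'comment' keys. Returns {customer_id: events in date order},
    i.e. one cross-channel feedback journey per customer."""
    grouped = defaultdict(list)
    for e in events:
        grouped[e["customer_id"]].append(e)
    for cid in grouped:
        # ISO date strings sort correctly as plain strings
        grouped[cid].sort(key=lambda e: e["date"])
    return dict(grouped)
```

With journeys like this in place, you can look at what a churned customer said across survey, contact center and social media before leaving.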


Tip #9. Use topics to create role-based reports

In Tip #6 I wrote about the four reporting layers and what kind of information different organisational layers require. Here I am going to focus on the most heterogeneous and complex layer, the managers.

One of the main feedback analysis and insight distribution objectives is to provide relevant information to stakeholders. Managers want to see pertinent information that they can use in decision making. In practice this means that they want to see customer feedback analysis results and the actual customer comments filtered and visualised for their organisational role.

Our job, as a feedback analysis company, is to provide the analysis results in a format that makes extracting role-specific feedback possible. Your job, as a CX professional, is to design and implement a dashboard or reporting system that distributes these insights among the managers.

Follow these six steps and you will get a solid role-based insight distribution solution.

1. Cover all managers who are involved in delivering the customer experience

Touchpoint managers are obvious but there are usually people working in the background that are also involved in making the customer experience as good and consistent as possible.

In this graph I created a simplified model of the grocery store managerial roles.

2. Don’t forget geographical roles

It is important to consider not only managers’ responsibilities and tasks but also their geographical role. If you have the location information as metadata, you can, for example, filter reports for regional or country managers.

3. Get a topic list from your feedback analysis system or vendor

The first thing to do, when you start verbatim analysis, is to map the top 100 topics into the managerial roles. In order to do that, you need to get the list of topics from your verbatim analysis service provider.

Here is a topic cloud of the most frequent top 100 (grocery store chain) topics.

4. Map the topics into the roles

The next step is to create a matrix of the roles and topics. Whether you map the top 100 or all topics is up to you. I usually map the topics that have more than 1% of all topic mentions and leave everything else for weak signal analysis (Tip #18).
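The 1%-of-mentions rule can be sketched as a simple split of the topic list, assuming you have per-topic mention counts (the topic names below are invented):

```python
def split_for_mapping(topic_mentions, threshold=0.01):
    """topic_mentions: {topic: mention_count}.
    Topics above the threshold share of all mentions get mapped to
    managerial roles; the rest are left for weak signal analysis.
    Returns (topics_to_map, weak_signal_topics), both sorted."""
    total = sum(topic_mentions.values())
    to_map = sorted(t for t, n in topic_mentions.items() if n / total > threshold)
    weak = sorted(t for t, n in topic_mentions.items() if n / total <= threshold)
    return to_map, weak
```

The resulting `to_map` list is what you would cross against the role list to build the roles-by-topics matrix.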

5. Implement the insight reporting system using a visualization platform

A customer insight reporting system should be implemented using CEM or 3rd party visualisation platforms like Qualtrics Vocalize, Microsoft PowerBI, Tableau, or Qlik Sense.

6. “Hard” configure the topics-to-role mapping into the visualisation platform

It is important that you configure the topics-to-role mapping into the dashboard. You cannot leave this task to the touchpoint and support function managers.

Hot topics and value driver tracking can be left to the top management (and you). Weak signal detection is definitely something only you or a data analyst can do. Everything else needs to be mapped into the organisational roles.


Tip #10. Create benchmarks using the whole data set

One of the first steps in a verbatim analytics project is to create the benchmarks. The benchmarks should be created using a large dataset covering at minimum six months of feedback data, but preferably a whole year. But how do you figure out which benchmarks to use and how to calculate their values?

The Topic cloud below shows all 483 Topics in the AcmeAir example data. Colour demonstrates the average Sentiment across all sentences in which the Topic was mentioned. The size tells how much people used the words and phrases mapped to that Topic in that specific context (a word or phrase can be mapped to a different Topic in a different context).

It is difficult to extract actionable insights from this dataset. That’s why you need to come up with benchmarks that set a perspective for the verbatim analytics process.

I also like to include the feedback channel metadata/statistics as part of the benchmarks table.

The chart above shows:

  1. Number of Signals (=verbatims) in the dataset (1247).
  2. Number of Topic mentions in those 1247 Signals (3439). In other words, customers talked about 2.76 (3439/1247) Topics per comment on average.
  3. Number of Topics customers talked about (483).
  4. Number of Hot Topics (16). Each one of them had a minimum of 34 mentions (1% of all Topic mentions).
  5. Average Sentiment across all topics. This is the only “hard” measure with a value between -1 and 1.

I know that some of you will laugh at the last sentence. Sentiment is considered a soft metric, both in its meaning and especially in its accuracy. But whereas a Topic is a relative metric, having value only in relation to other Topics in the same dataset, sentiment has an absolute value between -1 and 1. When it comes to accuracy, we have a lot of empirical evidence that sentiment analysis works.

Once you have these kinds of benchmarks in place, you can focus your analysis process on the most urgent issues. In this case the first step would be to look at the Hot Topics whose sentiment is under -0.18.
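The benchmark figures above can be computed from raw analysis results with a short script. The input format, one list of (topic, sentiment) mentions per verbatim, is an assumption for illustration:

```python
def benchmarks(verbatims, hot_share=0.01):
    """verbatims: list of verbatims, each a list of (topic, sentiment)
    mentions. Returns the benchmark table: number of signals, topic
    mentions, distinct topics, hot topics (>= hot_share of all
    mentions) and average sentiment across all mentions."""
    mentions = [m for v in verbatims for m in v]
    counts = {}
    for topic, _ in mentions:
        counts[topic] = counts.get(topic, 0) + 1
    hot = [t for t, n in counts.items() if n >= hot_share * len(mentions)]
    return {
        "signals": len(verbatims),
        "topic_mentions": len(mentions),
        "topics": len(counts),
        "hot_topics": len(hot),
        "avg_sentiment": round(sum(s for _, s in mentions) / len(mentions), 2),
    }
```

Run over a six-to-twelve-month dataset, this gives you the same five figures as the AcmeAir table.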


Tip #11. Don’t focus too much on the aggregated analysis results

Whole-dataset word clouds look good but tell very little about customer behaviour. Topic clouds are more valuable, because they “pool” words and phrases into industry-specific contextual “baskets”. But Topic clouds have limited analytical value.

Topic sentiment clouds are starting to be a bit more interesting. But having nothing to compare them to, their value in extracting actionable insights is also limited.

Because a Topic doesn’t have an absolute benchmark, you need to create ways to compare the analysis results over time or in relation to background variables.

For example, what are women talking about compared to men, or what kind of issues are customers in the UK market talking about compared to the average of all markets?
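Such a comparison can be sketched by computing one segment's topic shares against the whole dataset. The row format and segment values below are invented for illustration:

```python
def share_vs_overall(rows, segment):
    """rows: list of (segment_value, topic) pairs, one per topic
    mention. Returns {topic: (share_in_segment, share_overall)} so a
    segment can be compared against the whole dataset, since topics
    have no absolute benchmark."""
    def shares(topics):
        counts = {}
        for t in topics:
            counts[t] = counts.get(t, 0) + 1
        total = sum(counts.values())
        return {t: n / total for t, n in counts.items()}
    overall = shares([t for _, t in rows])
    seg = shares([t for s, t in rows if s == segment])
    return {t: (seg.get(t, 0.0), overall[t]) for t in overall}
```

A topic whose segment share is far above its overall share is a candidate for deeper root-cause analysis in that segment.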


Tip #12. Define hot topics

We have spent close to a decade trying to figure out what is the appropriate level of detail in a feedback categorisation system or scheme. The challenge is finding the right balance between significance and granularity.

How do you figure out which are the important, hot topics in this list of 483 Topics?

In the Etuma feedback categorisation service all relevant words and phrases are contextually mapped into hundreds of topics. Contextually means that a word or a phrase can be mapped into multiple topics depending on semantics (meaning).

Some of the topics are “hard”, like PRICING and STORE LAYOUT, and some are “soft”, like EASE, CONVENIENCE and QUALITY. In other words, topics are not created equal. Soft topics should be used in conjunction with hard topics. They can also be used to filter the whole dataset (e.g. what hard topics are customers who mention EASE talking about?).

Definition of a hot topic

I do the hot topic definition manually. First I create a list of topics that get the most mentions. Then I go through them one by one, remove “soft” topics and make sure that the topic context is correct by checking the top ten words and phrases mapped to a hot topic.

You need to ensure that the topics are defined correctly. Here are the top 10 words and phrases that are mapped to a topic called SEATS. In this case you might want to create a separate topic for TOILET SEAT or, if it is not relevant, exclude it from the analysis.

At the end of this process, you should have anywhere between 10 and 25 hot topics. In the AcmeAir example I was able to identify 16 hot topics.

The distinction between hot and other topics should be made in the database. But this can cause a problem, because your business changes all the time: what is a weak signal this month could become a hot topic in six months. Database changes are often too slow and difficult to make (unless you or somebody in your team has direct access and the right to make the changes). That’s why I like to define the hot topics within the visualisation and reporting platform. This approach requires that you distribute verbatim analysis results via a centralised customer experience management team. If not, the hot topic distinction needs to be made in the CX database.

Besides defining the hot topics, I also pool topics by the customer journey touchpoints and organisational roles. This enables you to easily share relevant and actionable information with the customer experience stakeholders.


Tip #13. Use topic volume to define the relative importance of a topic

Verbatim analysis results don’t have an absolute benchmark except the average sentiment. Topic (volume) is only interesting when you:

  1. compare a topic to itself over time (e.g. people are talking more about PRICING this month);
  2. use background variable as a filter to compare what topics are talked about (e.g. what are women talking about compared to men);
  3. use a topic to filter the main operational dimension (e.g. compare all stores: how much customers are talking about TIDINESS and what is the sentiment for each store’s TIDINESS. This enables you to find out the stores that have a problem with TIDINESS); and
  4. compare the volume of one topic to the overall topic volume over a certain time period (identifying your key loyalty drivers and hot topics).

Comparing one topic’s mentions (volume) to all topic mentions tells you the relative importance of that area of your operations. It is important to notice that I said relative. First of all, your feedback sample might be skewed due to seasonal or survey-process-related issues. It might not be representative across all customer segments and behaviours.

If your feedback system is carefully designed: you actively solicit feedback through many channels and touchpoints, crawl social media, and analyse contact and support center feedback, then this type of analysis will give you a strong signal of what is happening in your customer base.

The graph above demonstrates how much customers are talking about a specific hot topic as a share of all the comments in which hot topics are mentioned.

You can also use this same visualisation for pooled topics. What I mean by this is that you can pool the topics based on, for example, customer journey touchpoint or organisational role.

There are many ways to visualise verbatim analysis results. The important thing is to make the analysis results relevant to the report viewer. For the management team, touchpoint-specific information might be sufficient. At lower levels of the organisation the analysis results need to be more granular.


Tip #14. Monitor hot topic level volume and sentiment

Hot topics (Tip #12) are the most important customer experience attributes (or at least the most talked about attributes). They need to be tracked in more detail than other topics. In practice this means that you need to develop a dedicated topic-sentiment monitoring visualisation to track the hot topics.

There is just one rule here: when the sentiment is going down and volume up, you have a problem.
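This rule is easy to automate. Here is a minimal sketch (the data shapes and topic names are illustrative) that flags hot topics whose volume is rising while their sentiment is falling:

```python
def hot_topic_alerts(history):
    """history: per-topic list of (volume, avg_sentiment) per period,
    oldest first. Flags topics whose volume went up while sentiment
    went down between the last two periods."""
    alerts = []
    for topic, periods in history.items():
        (v_prev, s_prev), (v_now, s_now) = periods[-2], periods[-1]
        if v_now > v_prev and s_now < s_prev:
            alerts.append(topic)
    return alerts

history = {
    "CHECKOUT": [(120, 0.3), (180, -0.1)],  # more mentions, turning negative
    "PRICING":  [(90, -0.2), (85, -0.1)],   # fewer mentions, improving
}
# CHECKOUT trips the rule; PRICING does not.
```

In a real pipeline you would compute the volumes and sentiments from your categorised comments and run this check per reporting period.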


Tip #15. Use stacked charts to hide unintended volume volatility

Topic volume is a volatile measure. Some of the volatility is real (people are mentioning the topic more or less often in their comments) and some of it is unintended (the overall feedback volume varies due to feedback process or seasonal reasons). You need to figure out how to minimise the effect of unintended volatility in your feedback process.

Here is an example of survey volume volatility. It is difficult to tell the volume of different topics from this chart, and the reason for the volatility could simply be that you sent out fewer surveys that month.

The graphic above uses absolute volumes. The jump in volume in this example is caused solely by the survey process: in certain weeks more surveys are sent out. You can see that customers are talking more about CHECKOUTS, but it is difficult to see the relative importance of this.

Stacked charts hide the survey process and seasonal effects. Once you use stacked charts, it is a lot easier to see how different topics are performing.

This stacked chart demonstrates the relative share of topic mentions each week.

It is impossible to receive the same volume of feedback every day, week or month. What matters is the relative share of the topics customers or employees are talking about. Stacked charts make this easy to see.
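As an illustration (with made-up weekly counts), converting absolute counts into relative shares is exactly what a 100 % stacked chart does under the hood:

```python
def relative_shares(weekly_counts):
    """Convert absolute weekly topic counts into relative shares so that
    survey-volume swings between weeks do not distort the picture."""
    out = {}
    for week, counts in weekly_counts.items():
        total = sum(counts.values())
        out[week] = {topic: n / total for topic, n in counts.items()}
    return out

weeks = {
    "W1": {"CHECKOUT": 40, "PRICING": 60},  # 100 comments received
    "W2": {"CHECKOUT": 8,  "PRICING": 12},  # only 20 comments received
}
# In both weeks CHECKOUT is 40 % of mentions, despite the 5x volume drop.
```

Feed the normalised shares to your charting tool and the survey-process noise disappears from the picture.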


Tip #16. Use a topic-sentiment heatmap for extracting actionable insight

I have tested many different visualisations and graphs for detecting patterns from open-ended comments during the past six years. What I have learned is that a simple topic-sentiment heatmap works well.

This approach alone isn’t sufficient for comprehensive analysis because it leaves out the time trend.

As said earlier in tip #13, aggregated analysis results don’t tell you much on their own. You need to figure out ways to analyse a topic in relation to itself over time or to the background variables.

The most important thing is to measure how your primary operational dimension (store, customer segment, product (group), business unit, geographical area) is performing.

The topic-primary-operational-dimension analysis requires two heatmaps. The top one is your primary operational dimension (e.g. store) and the bottom one is the topic-sentiment heatmap.

Using two heatmaps (topic-volume-sentiment and store-volume-sentiment) as a filter to drill down to contextually relevant customer comments.

The two linked heatmaps will enable you to find out how different stores (products, business units, countries, market segments…) are performing, what kinds of problems different stores have, and why some stores are performing better than average (identify best practices).


Tip #17. Use topic, sentiment and background variable to drill-down to the root cause

You see the signal in your dashboard but have no idea what is causing it. Knowing that something is wrong is useless unless you know why or what is causing that problem. That’s why feedback analytics is divided into two distinct phases:

  1. Detecting that something is wrong (or right); and
  2. Finding the reason for that problem (or opportunity).

The process of finding this reason is called root-cause analysis (LINK).

Let’s say that you notice from the dashboard that the STORE LAYOUT topic volume starts increasing and the sentiment becomes more negative. In this case you know that something is wrong but you have no idea what. Once you drill down to the actual customer comments, you notice that the problem is with stacking carts that employees leave unattended in the aisles.

Drilling down to the root cause by using a topic and NPS detractors as filters.

You need to develop visualisations that can detect that something has changed. Once you notice that something has changed, you need to be able to contextually drill-down from that signal all the way to the actual customer comments. Often reading five to ten comments gives you a good idea about what is causing the problem.
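The drill-down itself is a simple filter. A minimal sketch, assuming comments are already tagged with topics and carry an NPS score (the field names are hypothetical):

```python
def drill_down(comments, topic, max_score=6):
    """Return the text of detractor comments (NPS 0-6) mentioning `topic`."""
    return [c["text"] for c in comments
            if topic in c["topics"] and c["nps"] <= max_score]

feedback = [
    {"text": "Carts block the aisles", "topics": ["STORE LAYOUT"], "nps": 3},
    {"text": "Great new layout",       "topics": ["STORE LAYOUT"], "nps": 9},
    {"text": "Too expensive",          "topics": ["PRICING"],      "nps": 4},
]
# drill_down(feedback, "STORE LAYOUT") keeps only the detractor comment.
```

Reading the handful of comments such a filter returns is usually enough to name the root cause.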


Tip #18. Use two heatmaps to benchmark support or sales agent performance

We’ve met with dozens of contact center managers during the past six years. They all complain about the same issue: It is difficult to detect systemic issues and benchmark qualitative agent performance. This happens because of the ineffectiveness of their manual categorisation process and the focus on support or sales transaction statistics (first contact resolution, number of processed web forms, sale completion rate etc.) rather than the qualitative agent performance.

The reason that NPS or CES comment, email, web form or chat log categorisation is so ineffective and inconsistent is that it still often relies on manual tabulation. There are hundreds of thousands of people around the world reading customer complaints and trying to make sense of open-ended comments. In other industries this type of task was handed over to machines ages ago. Isn’t it time to do the same in contact centres?

Once you automatically and consistently categorise all support and sales transaction surveys (using the CES format), incoming emails, social media complaints and webforms, you can start comparing the analysis results to the agent performance.

The upper heatmap includes the support agents and the lower one the topics. You can filter in either direction: choose an agent or a cluster of agents with similar behaviour and find out what kind of issues they are struggling with, or choose a topic and see which agents are performing well (or not so well) on that topic. If you have a CES or NPS score available in your post-transaction surveys, you can use those instead of the sentiment.

Detecting systemic issues (i.e., finding clusters of problems) would enable companies to take corrective action more quickly and reduce the number of claims they need to process related to a specific issue. Being able to benchmark agents, identify best practices and this way continuously improve contact center performance will reduce the number of incoming tickets and improve agent behaviour. And this, of course, in the long term leads to happier and more loyal customers.


Tip #19. How to use the average sentiment to prioritise improvement efforts

Whatever categorisation service you decide to use, it should include a topic level sentiment analysis. This means that each topic mention gets scored with negative, neutral or positive sentiment.

“The checkout line was really long but the person at the checkout was helpful and friendly.” -> CHECKOUT -1, SERVICE ATTITUDE +1.

In the AcmeRetail example the zero point of Hot Topics is 0.221. Everything to the left of this point is an area of improvement. The overall topic volume (relative topic importance) is on the y-axis. Topics in the upper left corner require your urgent focus.

In topic sentiment analysis the average topic sentiment ranges from -1 (all topic mentions were negative) to 1 (all topic mentions were positive). The absolute zero (sentiment) point is often unusable as a benchmark in feedback analytics; what is more interesting is the average sentiment across all topic mentions.
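A minimal sketch of this benchmarking idea, with illustrative data: compute each topic’s volume and average sentiment, then compare against the overall average sentiment rather than absolute zero:

```python
def topic_stats(mentions):
    """mentions: (topic, sentiment) pairs, sentiment in {-1, 0, 1}.
    Returns {topic: (volume, average sentiment)} plus the overall
    average sentiment, which serves as the benchmark 'zero point'."""
    totals = {}
    for topic, s in mentions:
        vol, acc = totals.get(topic, (0, 0))
        totals[topic] = (vol + 1, acc + s)
    overall = sum(s for _, s in mentions) / len(mentions)
    return {t: (vol, acc / vol) for t, (vol, acc) in totals.items()}, overall

mentions = [("CHECKOUT", -1), ("CHECKOUT", -1), ("CHECKOUT", 1),
            ("SERVICE ATTITUDE", 1), ("SERVICE ATTITUDE", 1)]
stats, overall = topic_stats(mentions)
# Topics scoring below the overall average are improvement candidates;
# rank them by volume to find the "upper left corner".
urgent = [t for t, (vol, avg) in stats.items() if avg < overall]
```

Here the overall average is 0.2, CHECKOUT averages about -0.33 and lands on the improvement side of the benchmark, while SERVICE ATTITUDE does not.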


Tip #20. Use negative sentiment to detect problems

The headline tells it all. Being able to use the negative sentiment or NPS score to drill down to the customer comments is a powerful way to find actionable information. Your analysis results become even more relevant and valuable when you filter by topic specific sentiment or NPS score.

More about NPS score and sentiment correlation in tip #24.

Being able to drill down to the negative customer (or employee) comments contextually gives you a powerful way to extract actionable information from open-ended customer comments.


Tip #21. Use positive sentiment to identify best practices

Being able to use the positive sentiment or NPS score to drill down to the customer comments is a powerful way to identify best practices. The analysis results become even more relevant and valuable when you filter by topic level sentiment or NPS score.

Being able to drill down to the positive customer (or employee) comments contextually gives you a powerful way to identify best practices that you can then share with the rest of the organisation.


Tip #22. Identify customers who have written many verbatims

Identifying customers (or employees) who write many open-ended comments is important. There are multiple reasons for this:

  1. These customers can distort the analysis results (the one who yells the loudest is heard the best); and
  2. They are often the people whose voice spills over to social media. If you don’t address their concerns, they will ‘talk’ about you somewhere else.

This graph lists the people who tweet about the UK banks the most.

It is important to identify the customers who ‘talk’ with or about you the most. They can be your most loyal customers, but more often they are not: they are people who like to spread their unhappiness to a wide base of friends and acquaintances, and social media makes this sharing easy. You need to identify them, communicate with them, and remedy their concerns as fast and as well as you can.
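Identifying these customers is a simple counting exercise. A minimal sketch, assuming each comment record carries a customer identifier (the field name is hypothetical):

```python
from collections import Counter

def frequent_commenters(comments, threshold=3):
    """IDs of customers who have written at least `threshold` comments."""
    counts = Counter(c["customer_id"] for c in comments)
    return {cid for cid, n in counts.items() if n >= threshold}

comments = [{"customer_id": cid} for cid in
            ["anna", "anna", "anna", "ben", "anna", "ben"]]
# "anna" has written four comments and may distort aggregate results.
```

You can weight or cap these customers in aggregate metrics, and route them to your outreach process.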


Tip #23. Identify customers who have written long verbatims

Customers who write long open-ended comments are usually less happy with your service, but they can also be your most loyal customers giving you insightful feedback (LINK). That’s why you need to identify customers who write long open-ended comments.

The dots in this heatmap are B2B customers. The x-axis reflects the average sentiment of all the topics the customer is talking about. The y-axis shows how many topics the customer mentions in their comment. I have filtered here for all the customers who talked about more than 10 topics in their open-ended feedback (I didn’t have a formula for counting characters, so I used the number of topics instead).

Using this method, it was easy to demonstrate that the cluster of unhappy customers in the top left corner of this heatmap was more negative about the quality of this company’s customer support. By being able to identify the customers and further drill down to their concerns, this company was able to focus its customer support center improvement efforts.

Doing this can bring additional focus to your detractor call-back process: if you are doing call-backs on unhappy customers (e.g. detractors) but need to limit the volume even further, this simple method of identifying long responses makes that possible.

Knowing who writes long comments is another data point that can be used in feedback analytics.


Tip #24. How to use topic level sentiment to find out what drives your NPS score

The NPS system is popular: it is simple and (maybe because of that) well liked, especially by top management. But the NPS score alone is just a signal: it doesn’t tell you what you should do to improve it. The reason for an NPS score change is hiding within the open-ended comments.

One thing to keep in mind is the correlation between the NPS score and topics when people mention more than one topic in their comments. If people mention just one topic, the NPS score and sentiment correlate very well. When customers mention multiple topics, sentiment is a better reflection of how the customer feels.

The correlation between the NPS score and a single topic is often inaccurate because customers talk about many topics, and only one or a few of them drive the score they gave. That is why topic level sentiment is a better indicator of their true feelings. I have no doubt that our sentiment analysis is accurate with high volume, high quality feedback (such as an NPS system). Use it to extract actionable insight from your NPS system.
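To check this claim on your own data, you can correlate topic sentiment with the NPS score for single-topic comments only. A minimal sketch with illustrative data (the field names are hypothetical; in this perfectly linear sample the correlation is 1):

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

feedback = [
    {"topics": ["PRICING"],             "sentiment": -1, "nps": 2},
    {"topics": ["CHECKOUT"],            "sentiment": 0,  "nps": 6},
    {"topics": ["SERVICE ATTITUDE"],    "sentiment": 1,  "nps": 10},
    {"topics": ["PRICING", "CHECKOUT"], "sentiment": 1,  "nps": 3},  # mixed drivers
]
# Restrict to single-topic comments, where the score and the sentiment
# are most likely to describe the same thing.
single = [c for c in feedback if len(c["topics"]) == 1]
r = pearson([c["sentiment"] for c in single],
            [c["nps"] for c in single])
```

If the correlation drops sharply once multi-topic comments are included, that is the effect described above: one topic is driving the score while the others add noise.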


Customer experience manifests itself in open-ended customer comments. You need to make sense of what customers are talking about accurately and fast.

You can find more information about multi-language feedback analysis here.

Matti Airas

Customer experience analytics expert
