Customers Don’t Care Why — Part One

The Tele2 Technology Blog · Oct 15, 2018

by Rasmus Aveskogh — Chief Architect

In 2014 Com Hem started a journey with a very clear goal: Sweden’s most satisfied customers within our business segment. We rolled out a broad Customer Experience (CX) initiative that spawned several projects and involved many different parts of our organisation. This is the first article in a series where I will take you through an amazing journey we’ve made within one key area of this initiative: how we started measuring quality of service (QoS) using big data approaches, and how that turned into the foundation for a new customer-centric way of working throughout the organisation. Quality of service is one of the most fundamental aspects of customer experience, and it was obvious to us that if we wanted Sweden’s most satisfied customers, quality of service had to be world class.

In this first article I give some background on how Com Hem used to work with QoS assessment and what made us realise that we needed to change. In upcoming articles I will cover, among other things, the technology behind this, how it has affected our way of working, and what the results are.

Performance indicators

As with anything you want to assess your performance in, and potentially improve, you want to measure it; that is, you want to quantify aspects of it and assess what level they’re at. If you’re working on improving your stamina by running three times a week, you might track your development by timing your runs, calculating your average pace, monitoring your pulse, and so on. In business terms these are called performance indicators, where the most important ones are usually referred to as key performance indicators, or just KPIs.
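
To make the running analogy concrete, here is a tiny sketch (all numbers made up) of turning raw measurements into such an indicator:

```python
# Illustrative only: quantifying a personal "performance indicator"
# the same way you would a business KPI. All numbers are invented.

runs_minutes = [31.5, 30.2, 29.8]  # time for each 5 km run this week
distance_km = 5.0

# Average pace in minutes per kilometre: lower is better.
avg_pace = (sum(runs_minutes) / len(runs_minutes)) / distance_km
print(f"Average pace: {avg_pace:.2f} min/km")
```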

A natural place to start when we began looking at quality of service within the customer experience initiative was therefore our performance indicators related to QoS. Initially we focused not so much on the actual figures themselves, but rather on which KPIs we had and how these were defined. In short, we wanted to make sure that they covered all aspects of QoS as it relates to customer experience. As it turned out, there were many metrics and performance indicators more or less related to QoS, so we had to determine which key areas to focus on first. We wanted this to be a data-driven decision (rather than one based on gut feeling), so we produced several data models where we correlated indicators of impacted customer experience (such as calls to customer service, customer trouble tickets, and out-of-the-ordinary speed tests) with the metrics that we had. I won’t (and can’t really) share all of the correlations we made, but below are some examples. The data backing these is long gone for regulatory reasons, so these images are actually screenshots made back in 2014.
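
To give a feel for this kind of correlation analysis, here is a minimal sketch assuming hypothetical per-segment aggregates. The column names, figures, and the use of pandas are illustrative assumptions, not our actual schema or tooling:

```python
# A sketch of correlating a network quality metric with customer-impact
# indicators, aggregated per network segment. All data is hypothetical.
import pandas as pd

# One row per network segment: a quality metric and counts of
# customer-impact indicators aggregated over the same period.
segments = pd.DataFrame({
    "segment": ["A", "B", "C", "D"],
    "signal_quality_issues": [42, 3, 17, 55],    # e.g. RF impairment events
    "trouble_tickets":       [19, 1, 8, 24],
    "speed_tests":           [130, 12, 60, 170], # out-of-the-ordinary runs
})

# Pairwise Pearson correlation between the quality metric and each
# customer-impact indicator.
numeric = segments[["signal_quality_issues", "trouble_tickets", "speed_tests"]]
print(numeric.corr()["signal_quality_issues"])
```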

The example above illustrates how we established correlations between network quality issues (purple heatmap) and customer trouble tickets (red triangles). The cyan dots are all customers. The actual processing was not performed in this visual manner, but visualisations like these are an important part of understanding the underlying patterns you’re looking for.

Above we see an example where we took the model a bit further and established a relationship between customers performing speed tests (yellow heatmap), such as Bredbandskollen, and their likelihood of filing customer trouble tickets with us. Using control groups, we could see clear correlations between customers being affected by network quality issues, which reduce network throughput, and the rate at which they filed tickets with us.
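
As a toy version of that control-group comparison (all figures invented for illustration):

```python
# Hypothetical control-group comparison: ticket rate among customers on
# segments with known quality issues versus everyone else.

affected = {"customers": 2_000, "tickets": 180}
control  = {"customers": 50_000, "tickets": 900}

rate_affected = affected["tickets"] / affected["customers"]
rate_control  = control["tickets"] / control["customers"]

print(f"Affected group: {rate_affected:.1%} filed tickets")
print(f"Control group:  {rate_control:.1%} filed tickets")
print(f"Relative rate:  {rate_affected / rate_control:.1f}x")
```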

Using models like these, we soon determined that the central performance indicators were the ones assigned to network outages, network capacity, and network quality. This does not mean that we discarded the others; it only means that we had to start somewhere, and these were the ones we concluded were most relevant for quality of service, and therefore customer experience.

I will go into detail on these performance indicators in later articles, but first I want to expand on a revelation we had quite early in our investigations: no-one had the full picture!

The customer doesn’t care why

At the heart of providing a high level of quality of service to our customers lies an often forgotten mantra, “the customer doesn’t care why”. This short statement tries to encapsulate the fact that our customers “measure” quality of service on a very simple scale. For example, the scale for broadband could look something like this:

If our network suffers from congestion in one segment one evening and in another segment the next, the customer cares little about the exact whereabouts: if the broadband is slow, the broadband is slow! Likewise, if our cable is cut one day (which may not even be our fault), and we suffer from an RF-related issue impacting the network another day, the customer will simply conclude that the service was out of order, regardless of why. In some cases we may be cut some slack if the responsibility of a third party (such as with the cable cut) is communicated effectively, but that’s typically not something we can rely on.
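
As a toy illustration of the mantra (the root causes and states below are hypothetical, not our actual classification), many distinct root causes collapse onto the same few customer-perceived states:

```python
# The mantra in code form (a sketch, not a production model): the customer
# only ever sees the perceived state, never the root cause behind it.

PERCEIVED_STATE = {
    "congestion_segment_12": "slow",
    "congestion_segment_47": "slow",
    "cable_cut_third_party": "out of order",
    "rf_ingress_node_3":     "out of order",
}

def customer_view(root_cause: str) -> str:
    """The customer doesn't care why: only the perceived state matters."""
    return PERCEIVED_STATE.get(root_cause, "works")

print(customer_view("cable_cut_third_party"))  # -> "out of order"
print(customer_view("congestion_segment_12"))  # -> "slow"
```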

All this makes it essential to assess quality of service from the customer’s point of view, regardless of root cause. Our revelation from above was: we weren’t. Like most service providers, we had fairly robust proactive network maintenance processes revolving around capacity planning and network quality assessment. We also had a network operations center (NOC) highly skilled in locating and solving sudden incidents. The problem was that these processes, and the business in general, lacked overall insight into how accumulated quality of service (and its associated customer experience) developed over time. Accumulated in this context really means two things: accumulated across all things affecting quality of service, and accumulated over time. Let me illustrate this:

Say that the timeline below represents what a particular customer experiences over a (very bad) period of time:

Meanwhile, our people working with capacity saw this:

.. while our people working with network quality issues saw this:

.. while our people working with TV production saw this:

.. our NOC, given the 360° view that they have, did see both the capacity and the RF-related issues:

“What’s the problem?”, you might ask. We caught everything! Well, yes, and no. Everyone saw something, but no-one saw everything. Is it important that all processes see “everything”? No, but all these processes and their associated departments work with limited resources, and they need to prioritise. That means prioritising one congested network segment over another, or addressing one network quality issue before another. To become truly customer-centric and make sure customer experience is our highest priority, these decisions must be made with the full picture at hand, including issues that are traditionally related to other domains, processes, and departments. To put it very bluntly: customer “blood pressure” becomes a dominant metric to prioritise on.
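
To make the idea concrete, here is a minimal sketch of such accumulated customer impact, assuming hypothetical per-domain event data. Each domain only sees its own events, but the accumulated view reveals which customer is actually worst off:

```python
# A sketch of "customer blood pressure": accumulate impact events from
# every domain onto the customer, regardless of root cause, and prioritise
# by the accumulated total. All event data below is hypothetical.
from collections import defaultdict

# (customer_id, impacted_hours) as seen by each domain in isolation.
capacity_events = [("cust-1", 3.0), ("cust-2", 1.0)]
quality_events  = [("cust-1", 5.5)]
outage_events   = [("cust-1", 2.0), ("cust-3", 4.0)]

accumulated = defaultdict(float)
for domain in (capacity_events, quality_events, outage_events):
    for customer, hours in domain:
        accumulated[customer] += hours  # accumulate across domains and time

# Highest accumulated impact first: cust-1 feels the worst service, even
# though no single domain saw more than a part of that picture.
for customer, hours in sorted(accumulated.items(), key=lambda kv: -kv[1]):
    print(customer, f"{hours:.1f}h impacted")
```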

In the upcoming articles I will go into detail on how we addressed this, i.e. how we started measuring the actual customer experience in real time, and how we came up with concepts such as customer experience accounting.

You can get a teaser in the following videos (English and Swedish):
