Designing the future of Call Centers

Aveejeet Palit
Published in Moonraft Musings
5 min read · Jun 29, 2016

With advancements in design and technology, what does the future hold for Call Centers? Will they change, or become irrelevant entirely?

Nothing irritates more than annoying malware/adware that keeps generating popups on our screens, one after the other. Within a matter of seconds, the screen fills with 10–15 popups, leaving us banging our heads against the table! Urgh!

Unfortunately, most software tools behave like this malware. They throw up a lot of data on screen … some relevant, some irrelevant and some downright damaging (wrong information that can lead to major errors!). For many years now, companies have been trying to solve the problem of how to classify, detect and display relevant data in a form the user can consume easily.

This becomes incredibly apparent when we study the working situation of a Call Center executive. Call Center executives need to strike the optimum balance between efficiency and customer satisfaction during their calls. They are measured on the number of calls completed in a day and also the amount of positive feedback received.

Their challenge in achieving these targets is fetching the data relevant to the call from the deluge of information thrust upon them, all while staying engaged in the conversation with the caller.

Almost 75% of the time on a call is spent on finding relevant information!

They have to sift through multiple windows, type in numerous queries, note down requested information and much more … while the customer is probably screaming their lungs out on the other end of the call!

How could design innovation make the Call Center executive more efficient at getting the right information for the call? Is it only a UI design problem of condensing the large number of screens and the navigation across applications into one screen's real estate? Is that even the right approach, assuming it were feasible to squeeze in so much information and still be usable?

A simple UI redesign exercise soon proved to be futile given the sheer volume of information from numerous applications and screens. It was clear that a different approach was needed.

The inspiration came from realising that search engines solved this problem long ago: bringing the most relevant information to the user from petabytes of data strewn across the internet.

What if we applied the same design principles of a search engine query flow to address the problem at hand?

Instead of navigating across screens for information, the user needed to replicate the same behavior they have used all their lives to find things on the internet … Search!

Using this insight as a driving principle led to addressing two of the key challenges in solving the problem:

Challenge #1 — Data existed across a multitude of screens from different applications operating in silos. The executives had developed coping behaviours such as tabbing across all the applications just to keep the sessions alive! They kept personal notepads to cut and paste information they thought might become useful during the call, all while negotiating a large amount of information mostly irrelevant to the call at hand.

A Search-driven approach shifted the practice of manually fetching data across different applications to the machine intelligently presenting the data needed for the call. As the context of the call becomes known during the opening introduction of “How can I help you today?”, the executive can enter the keywords they know relate to the context of the call into the Search bar. The system fetches the relevant information from those keywords while the executive goes through the motions of confirming the caller’s identity. This enables maximum efficiency of data access with minimum shifting across applications and their screens.
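The retrieval behind such a Search bar can be sketched as a simple inverted index over records pulled from the siloed applications. This is a minimal illustration only — the record fields, application names and matching logic are assumptions for the sketch, not details of the actual system.

```python
from collections import defaultdict

# Hypothetical records from the siloed applications (billing, orders,
# account) -- field names and contents are illustrative only.
RECORDS = [
    {"app": "billing", "id": 1, "text": "refund pending for duplicate payment"},
    {"app": "orders",  "id": 2, "text": "order delayed awaiting refund approval"},
    {"app": "account", "id": 3, "text": "address change request completed"},
]

def build_index(records):
    """Map each keyword to the set of records that mention it."""
    index = defaultdict(set)
    for i, rec in enumerate(records):
        for word in rec["text"].lower().split():
            index[word].add(i)
    return index

def search(index, records, query):
    """Return every record matching any keyword typed into the Search bar."""
    hits = set()
    for word in query.lower().split():
        hits |= index.get(word, set())
    return [records[i] for i in sorted(hits)]

index = build_index(RECORDS)
results = search(index, RECORDS, "refund")
# Surfaces the billing and orders records; the unrelated account
# record stays out of the executive's way.
```

A production system would add ranking, stemming and fuzzy matching, but the core idea — one query fanned out across every silo — is the same.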

Challenge #2 — Multiple layers and types of data displayed in different windows make it extremely difficult to comprehend and identify the information relevant to the caller. Studying the patterns in the call logs and correlating them to the kind of information needed revealed a way to cluster the information much more efficiently for the executive.

The information architecture was completely rethought by focussing on three simple ideas: (i) we found that 80% of the queries could be addressed by grouping the data into widgets; (ii) the same widgets could address different queries if the information was clustered intelligently; (iii) widgets could then be tagged to identify the nature of the information they contain. Cross-linking the tags with keywords in the Search bar would bring up the widgets needed for a call.
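The cross-linking step amounts to a small tag lookup: each widget carries a set of tags, and any query keyword that intersects a widget's tags brings that widget on screen. The widget names and tags below are invented for illustration, assuming a simple exact-keyword match.

```python
# Illustrative widget definitions: each widget groups related data
# and carries tags describing the kind of information it holds.
WIDGETS = {
    "payments_widget": {"tags": {"refund", "billing", "payment"}},
    "orders_widget":   {"tags": {"order", "delivery", "delay"}},
    "profile_widget":  {"tags": {"address", "account", "contact"}},
}

def widgets_for_query(query, widgets=WIDGETS):
    """Cross-link Search-bar keywords with widget tags to choose
    which widgets should appear on screen for this call."""
    keywords = set(query.lower().split())
    return [name for name, w in widgets.items() if keywords & w["tags"]]

# A query about a delayed refund lights up both relevant widgets,
# while the profile widget stays hidden.
widgets_for_query("refund delay")  # -> ['payments_widget', 'orders_widget']
```

Because one widget can carry several tags, the same widget naturally serves many different queries — which is exactly what lets a small set of widgets cover 80% of the queries.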

Now … all the Call Center representative needed to do was type the keywords into the Search bar while having their opening conversation with the caller … and all the relevant information would be there!

Leveraging the power of search in this manner improved efficiency by 25%, reducing average call time from an industry average of 280 seconds to 210 seconds per call.

This would have been difficult, if not impossible, to achieve using a traditional UI redesign approach across such a large set of information spread over so many applications. Reducing the Call Center executive’s stress in merely finding and copying the right information created the possibility of a cross-sell by delivering a better customer interaction.

The current trends in technology and associated design interventions can help in completely rethinking the problems we face today. A Search-driven approach for the Call Center executive will quickly create a huge training set for a Machine Learning engine to refine the information presented to the executive. Confidence built in this manner could in future open up the possibility of making the Search bar available directly to callers on their phones! Leveraging speech recognition technologies could easily make it possible for callers to speak rather than type their query. Many callers would then satisfy their need directly by speaking into their phones, reducing the number of calls a Call Center has to service. While Call Center executives might still need to support this mechanism for some time, the machine will learn from each interaction, eventually reducing the need for human intervention.

All the above are clearly in the realm of possibility given the rapid development of Machine Learning and speech technologies. The interesting question this raises is: “What new value could the traditional Call Centers of today provide in a scenario where Machine Learning tools automate a significant percentage of the requests handled by executives today?”

PS. Thanks to Nivedita Kamat for her contribution to this writeup
