IAK17 / Conversational Interfaces

This year's IAK17 conference focused on UX for Smart Services. Over two full days, a series of talks discussed current and future challenges for human-centred product development in an industry that tends to apply artificial intelligence wherever possible.

Talks touched on innovation, empathic services, privacy and industry use cases. This post comments on talks about conversational interfaces. A previous post was dedicated to Innovation and design.

Chatbots are talked about a lot…

There has been a lot of talk about, and experimentation with, conversational interfaces. A conversational interface, as opposed to a graphical interface, relies on speech or chat to communicate with the user; this can take the form of a chatbot or a speech recognition system. It is not a new idea: Interactive Voice Response (IVR) systems, the simplest form of interaction via speech, have been around for quite a while.

Skyscanner on Skype

More sophisticated language recognition and AI enable richer conversations. Siri, Google, fun collaborative apps like Google Allo or Ponchobot, and tools like Visabot and Skyscanner on Skype are certainly just the beginning.

However, they might not replace beautifully designed graphical interfaces but rather complement them, as an additional channel through which a service presents itself.

Challenges

In his talk “Conversational Interfaces”, Tim Friedmann (MediaWorx) compared interaction with chatbots to interaction via command lines, GUIs or touchscreens.

While interaction with chatbots might feel the most natural, it comes with a challenge. Graphical interfaces provide visual cues in the form of icons or text. Conversational interfaces often lack such cues about the available options, so the user has only a vague expectation of what she can ask the chatbot. The concept of “Erwartungskonformität” (conformity with user expectations, part of ISO 9241-10) describes the requirement that interfaces be consistent and conform with a user's experience and expectations in order to be comprehensible. This is a challenge for current chatbots. In Human Computer Interaction this is also described as the “gulf of evaluation”: the difficulty for a user to evaluate the state of a system and the available options. In human communication we use all kinds of cues, such as facial expression or posture, to bridge that gulf.

Gulfs of evaluation & execution, The Design of Everyday Things (Norman 1988)

Unfortunately, this is compounded by the (expected) unlimited options, the degrees of freedom with which one can enter a dialogue with a chatbot. In contrast to that expectation, simple chatbots (e.g. Alexa) currently require the user to recall specific commands, which obviously conflicts with the user's expectation of simply engaging in a conversation. And that is before we even get to irony or semantics.

Command vs Dialogue

Current chatbots fail at having rich conversations. They do not yet understand statements that refer to previous questions. However, once they move on from being a dump for spoken commands to a system that is able to actually ask questions back, things will become really interesting.
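To make the distinction concrete, here is a minimal sketch of the difference between a command dump and a dialogue. Everything in it is hypothetical (a toy weather bot with made-up slots and phrasings): the point is only that a little dialogue state lets a follow-up like “and tomorrow?” be resolved, and lets the bot ask back when information is missing.

```python
# A minimal sketch of command vs dialogue (hypothetical weather bot).
# A plain "command" bot would need the full request every time; this one
# keeps a little state so follow-ups and ask-backs become possible.
class WeatherDialogue:
    def __init__(self):
        self.city = None      # remembered across turns
        self.day = "today"

    def reply(self, utterance):
        text = utterance.lower()
        # naive slot filling: pick up a city or a day if one is mentioned
        for city in ("berlin", "london", "lisbon"):
            if city in text:
                self.city = city.title()
        if "tomorrow" in text:
            self.day = "tomorrow"
        # ask back instead of failing when a slot is still missing
        if self.city is None:
            return "For which city?"
        return f"(pretend forecast) {self.city}, {self.day}: sunny."

bot = WeatherDialogue()
print(bot.reply("What's the weather?"))   # -> For which city?
print(bot.reply("In Berlin"))             # -> Berlin, today: sunny.
print(bot.reply("And tomorrow?"))         # -> Berlin, tomorrow: sunny.
```

The toy logic does not matter; what matters is that the context (the city) persists across turns, which is exactly what a pure command interface throws away.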

Displaying what the machine “can see”

New mental models will be needed for such interaction, as it might feel awkward and unexpected to be asked questions by a machine. But it might, for example, allow for much better research and knowledge management tools. Imagine being able to find documents by roughly describing their content, or telling the machine in which direction to carry on a search, combined with a visual representation of the results.
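As a rough illustration of that idea (not anything shown at the conference), the sketch below ranks documents by the similarity between a free-text description and their content. It uses plain TF-IDF via scikit-learn as a crude stand-in for real semantic matching, and the file names and contents are invented.

```python
# A minimal sketch of "find documents by roughly describing their content":
# rank documents by similarity between a description and each document's text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "q3_report.docx": "Quarterly revenue figures and forecast for the sales team",
    "onboarding.pdf": "Checklist and contacts for onboarding new employees",
    "api_notes.md": "Notes on authentication and rate limits of the partner API",
}

def find_documents(description, docs, top_n=2):
    names = list(docs)
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([description] + [docs[n] for n in names])
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return sorted(zip(names, scores), key=lambda x: -x[1])[:top_n]

print(find_documents("the report with the quarterly revenue numbers", documents))
```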

In his talk “UX for KI” (UX for AI), Jan Korsanke (SinnerSchrader) emphasised that machines need to explain their understanding of our context, for example cars that represent what they are “seeing”. The same idea holds for conversational interfaces. It will be an interesting challenge to represent the “understanding” a bot has of a conversation.

UX for Chatbots

Assuming the technical challenges for chatbots can be solved, they provide a whole new playground for experience design.

Sandra Griffel (Denkwerk) explained how aspects of flow and cadence gain importance. There are different opinions on whether chatbots should have names and a “personality”. Regardless of that, it matters which prompts a chatbot gives, and when it gives them. Poor design might overload the user with too much information, or with the wrong information at the wrong time. Trust in a tool is certainly established throughout such a dialogue, and through the use of the right language and terminology.

Denkwerk included a content writer in the product team in order to produce the right content and style of language. Even for a bot trained on a corpus of existing literature or movie scripts, it can still be very important to choose an appropriate selection to define the style of language the bot is going to use.

Well-known methods from information architecture still hold for conversational interfaces. It is important to define a domain model that describes which concepts a bot understands; of course, this can be linked to much broader but less specific resources such as WordNet or word2vec.
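A domain model can start out as simply as an explicit mapping from concepts to the surface forms that signal them. The sketch below assumes a hypothetical flight-search bot (concepts and cue words are invented); in practice the cue lists could be expanded with WordNet synonyms or nearest neighbours in a word2vec embedding space.

```python
# A minimal sketch of a domain model for a hypothetical flight-search bot:
# an explicit list of concepts the bot understands, each with cue words.
DOMAIN_MODEL = {
    "destination":    {"to", "towards", "destination"},
    "departure_date": {"on", "leaving", "departing", "tomorrow"},
    "budget":         {"under", "budget", "cheap", "cheaper"},
}

def concepts_in(utterance):
    """Return the domain concepts an utterance touches."""
    tokens = set(utterance.lower().split())
    return [concept for concept, cues in DOMAIN_MODEL.items() if tokens & cues]

print(concepts_in("I want to fly to Lisbon under 100 euros"))
# -> ['destination', 'budget']
```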

The dynamic nature of interactive conversations might require a re-invention of user journeys and flows. A conversation still has different stages, entry points and decision points. Carefully scripted basic chatbots can certainly be described as detailed decision trees, not too dissimilar to old IVR systems. For more dynamic bots this might lead to more loosely connected scripts, similar to scripted sequences in computer games. Games will probably be the first place to explore more advanced chatbots.
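For the carefully scripted case, such a decision tree can be written down very directly, much like an IVR menu. The following sketch is hypothetical; node names, prompts and branching keywords are invented purely for illustration.

```python
# A minimal sketch of a scripted chatbot flow as a decision tree.
# Each node has a prompt and keyword-triggered branches; empty branches mark
# a leaf where the answer would be handed over to search, lookup, etc.
FLOW = {
    "start":       {"prompt": "Do you want to book a flight or check a booking?",
                    "branches": {"book": "destination", "check": "reference"}},
    "destination": {"prompt": "Where would you like to fly?", "branches": {}},
    "reference":   {"prompt": "What is your booking reference?", "branches": {}},
}

def run_flow(node="start"):
    while True:
        step = FLOW[node]
        answer = input(step["prompt"] + " ").strip().lower()
        if not step["branches"]:                 # leaf node: hand the answer over
            return node, answer
        nxt = next((n for key, n in step["branches"].items() if key in answer), None)
        if nxt is None:
            print("Sorry, I did not get that.")  # stay on the same node and re-ask
        else:
            node = nxt

# run_flow() walks the tree: start -> destination or reference.
```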

The Mechanical Turk — illustrating a Wizard of Oz setup

The easiest way to test early might be a Wizard-of-Oz approach. Having outlined the digital product, the teams invited potential users into their lab and had them “speak” to a “chatbot”: a colleague sitting in the next room, prepared with an appropriate script, user journey and language samples. This allowed them to understand typical questions and the satisfaction with the answers provided in a given scenario.

What are your thoughts on conversational interfaces?

