A new multimodal conversation language

Iskander Smit
Published in Target_is_new
Jan 24, 2016

In his interesting overview of the move to conversational commerce, Chris Messina also touches on the new conversational language we will develop with our services, based on task-based command lines. He gives the example of typing commands into Slack conversations with a slash, and of how the new app Peach uses a sub-language for all kinds of special shortcut messaging. I agree with Chris that, in this first phase of conversational interactions, we will learn to have these kinds of dialogues.
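A minimal sketch of how such a slash-command sub-language might be parsed — the command names below are hypothetical examples, not Slack's or Peach's actual command set:

```python
def parse_command(message):
    """Parse a Slack-style slash command into a command name and arguments.

    Returns None for ordinary messages that carry no command.
    """
    if not message.startswith("/"):
        return None
    parts = message[1:].split()
    if not parts:
        return None
    return {"command": parts[0], "args": parts[1:]}

# Hypothetical task-based commands typed inside a chat:
print(parse_command("/remind me in 20 minutes"))
# {'command': 'remind', 'args': ['me', 'in', '20', 'minutes']}
print(parse_command("just a normal message"))
# None
```

The point of the sketch is that a conversation channel can double as a command line: the same text box carries both human dialogue and structured, machine-readable tasks.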

In my trends-for-2016 post I shared the expectation that we will get used to having a dialogue with the products we use, and to having more tangible interactions at the same time. Let's elaborate a bit on that. These conversations will be a possible format for creating interoperability between the different services and products we use. We had a short hype of intelligent agents at the end of the last century; it was too early then, and the technology was not ready — big data, for instance, was missing. It will happen now. And the special behaviour will be the connecting of the different services. That goes two ways.

We will have the different services connected through our own conversations, but we will also enhance our interactions with those services through more than one channel: multimodal, combining screen interactions with speech, chat, and physical experiences. Just as Chris mentioned the benefit of a person's presence in a chatroom for verifying payments, so adding physical contact points complements the purely digital ones. That is the context for the learning dialogue.

A really interesting development. The challenge is to design these conversations using rule-based principles and machine-learning support, applying them to both digital and physical interactions, with an open setup for people to create their own language.
