CUI: Conversational User Interfaces
The One API to Rule Them All
Update: I’ve started using another acronym as well: RCUI, or Rich-media Conversational User Interface. An RCUI uses voice but also includes a screen, usually built with web technology, so it can render anything visually as well as audibly.
After reading all the predictions from Gartner and others about “virtual agents” and their many names (text bots, chat bots, bots, virtual assistants, etc.), it is clear we need yet another acronym.
Imma use CUI from now on (to go with GUI and CLI). I bet others out there are using it too, though I have never heard of anyone officially doing so. It really needs a Wikipedia page, maybe someday.
I like CUI because it covers the other colloquialism, chat user interfaces.
I’ll save the justification for CUI development knowledge for later, as well as my take on the best domain model and pattern for such things. Suffice it to say, every CUI implies at least some level of AI with the following specific concepts (sketched in code after this list):
- Responses that are spoken and matched against user input with regular expressions
- Parts that cover the current context of the conversation, story, or interaction
- Actions that can often be run asynchronously in the background and reported on later when asked about again
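
Here is a minimal sketch of how those three concepts might fit together. All of the names here (Response, Part, Action, Conversation) are hypothetical, not taken from any particular framework, just one plausible shape for the domain model:

```python
import asyncio
import re

class Response:
    """A spoken reply, matched against the user's input with a regex."""
    def __init__(self, pattern: str, text: str):
        self.pattern = re.compile(pattern, re.IGNORECASE)
        self.text = text

class Part:
    """The current context of the conversation: which responses apply now."""
    def __init__(self, name: str, responses: list[Response]):
        self.name = name
        self.responses = responses

class Action:
    """Work that runs asynchronously in the background."""
    def __init__(self, name: str, work):
        self.name = name
        self.work = work  # an async callable returning a result

class Conversation:
    def __init__(self, part: Part):
        self.part = part
        self.pending: dict[str, asyncio.Task] = {}

    def start(self, action: Action):
        # Fire and forget; the user can ask about it later.
        self.pending[action.name] = asyncio.create_task(action.work())

    def hear(self, utterance: str) -> str:
        # Report on background actions when the user asks again.
        if re.search(r"\b(done|ready|status)\b", utterance, re.IGNORECASE):
            finished = {n: t.result() for n, t in self.pending.items() if t.done()}
            if finished:
                return "; ".join(f"{n}: {r}" for n, r in finished.items())
            return "Still working on it."
        # Otherwise fall back to the regex-matched Responses for this Part.
        for response in self.part.responses:
            if response.pattern.search(utterance):
                return response.text
        return "Sorry, I didn't catch that."

async def main():
    weather = Part("weather", [
        Response(r"\bforecast\b", "Checking the forecast now."),
    ])
    convo = Conversation(weather)

    async def fetch_forecast():
        await asyncio.sleep(0.1)  # stand-in for a slow API call
        return "Sunny, 72F"

    convo.start(Action("forecast", fetch_forecast))
    print(convo.hear("What's the forecast?"))  # regex-matched Response
    await asyncio.sleep(0.2)                   # let the Action finish
    print(convo.hear("Is it done yet?"))       # reports the Action's result

asyncio.run(main())
```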
One thing is perfectly clear: the future of all interactions will be humans and virtual assistants/agents conversing with each other entirely in written, natural language. When structured data is available, it can be passed along as JSON, of course (see the sketch below); otherwise, everything will simply talk to everything else in plain language. Imagine the Google Duplex demo where not only is the assistant an AI, but so is the person taking the reservation on the other end, since that equally mundane task could just as easily be automated with a conversational interface.
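A hypothetical example of that point: the conversation itself stays in natural language, but structured data rides along as JSON when it is available. The message shape and field names here are made up for illustration:

```python
import json

# A conversational message: a human-readable part plus a structured payload.
message = {
    "say": "Table for two is booked for Friday at 7 PM.",
    "data": {  # machine-readable payload the other agent can act on directly
        "intent": "reservation.confirmed",
        "party_size": 2,
        "time": "2024-06-07T19:00:00",
    },
}

wire = json.dumps(message)           # what actually crosses the wire
received = json.loads(wire)
print(received["say"])               # the human-facing, spoken part
print(received["data"]["intent"])    # the structured part, no parsing needed
```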
