Where does conversational UI leave design?
Stelios Constantinides

> During our process, a surprising finding was that personality matters. The Slack API is fast, meaning Lunchy replied almost as soon as you finished typing.
> Part of the magic was lost. Lunchy felt like a robot, not a helpful member of the team who happened to love lunch.
> The fix? Simulate that Lunchy was “typing” and delay his response by a mere second.
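The fix described above is simple to sketch. Here's a minimal, hypothetical version in Python: the `send_typing` and `send_message` callables are stand-ins for whatever your bot framework provides (Slack's RTM API, for instance, supports a typing indicator event), and the one-second delay is the "mere second" from the quote.

```python
import time

def reply_with_personality(channel, text, delay_seconds=1.0,
                           send_typing=None, send_message=None):
    """Simulate a human-like pause before replying.

    send_typing / send_message are hypothetical hooks into your bot
    framework; wire them to real API calls in practice.
    """
    if send_typing:
        send_typing(channel)       # show the "Lunchy is typing…" indicator
    time.sleep(delay_seconds)      # the deliberate, human-feeling delay
    send_message(channel, text)    # then actually reply
```

The point isn't the code, it's the design choice: an artificial delay trades raw responsiveness for the impression that someone is on the other end.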

It’s both ironic and fascinating how *simulating fallibility* does a better job of convincing us of a system’s life-likeness than intelligence does, and much of it boils down to a person’s ability to relate to the system.

There is a fairly long history of research on this topic, and it's starting to show up commercially. See, for example, Cynthia Breazeal's work on social robots, and how this is filtering down into her market research on early prototypes of the Jibo home robot. Jibo is 'childlike' in its appearance and speech rhythms, and deliberately programmed to make 'endearing' mistakes in grammar and the like, because doing so makes the robot more relatable. SoftBank's Pepper robot is similarly designed to be childlike and fallible.

In the context of relatability, character and personality design is an integral part of conversational UI.

Neat stuff!