It’s true that many services offer natural language processing, and in many cases responses are pulled from a pool of responses. Init.ai uses a retrieval-based model because we want to ensure businesses have absolute control over the responses sent to their users. This may change over time, but for now it’s the right call for us. The state of generative models is not where we want it to be for our purposes, and they are often not reliable (see: Microsoft Tay). Whether it should sound like a bot, though, is entirely up to the designer or product person. I am firmly in the camp of not trying to trick people into thinking they’re speaking to a human. I don’t see a benefit in that and generally advise against it, unless there is a real business advantage in doing so.
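To make the distinction concrete, here is a minimal sketch of what "retrieval-based" means in practice. This is purely illustrative (it is not Init.ai's implementation): the bot can only ever return a response from a business-approved pool, selected by a simple token-overlap score, so nothing unvetted ever reaches the user.

```python
# Illustrative retrieval-based response selection: the bot picks from a
# curated pool of approved responses rather than generating free text.
# The pool contents and threshold below are made up for the example.

RESPONSE_POOL = {
    "what are your hours": "We're open 9am-5pm, Monday through Friday.",
    "how do i reset my password": "Visit Settings > Account and click 'Reset password'.",
    "where are you located": "Our office is at 123 Example St.",
}

def tokens(text):
    return set(text.lower().split())

def select_response(user_message, pool=RESPONSE_POOL, threshold=0.3):
    """Return the approved response whose trigger best matches, or None."""
    best_key, best_score = None, 0.0
    msg = tokens(user_message)
    for trigger in pool:
        trig = tokens(trigger)
        score = len(msg & trig) / len(msg | trig)  # Jaccard similarity
        if score > best_score:
            best_key, best_score = trigger, score
    # Below the threshold, return None so a designed fallback can take over.
    return pool[best_key] if best_score >= threshold else None
```

A real system would use a trained model to score candidates instead of raw token overlap, but the guarantee is the same: every possible output was written and approved by the business.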
And when these services err, which they will, we will need graceful patterns for handling those cases, in the same way that we’ve handled server errors on websites for years. It’s a design problem. Do you stop using a service the first time it returns a 500?
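The website analogy suggests a shape for those patterns. Below is a hedged sketch (all names are invented for the example) of treating an NLP failure like a 500 page: catch it, respond with a designed recovery message, and escalate to a human rather than failing silently.

```python
# A sketch of graceful failure handling in a conversational app,
# analogous to a custom 500 error page. The classify() function stands
# in for whatever NLP layer is in use; it may raise or return None.

FALLBACKS = [
    "Sorry, I didn't quite get that. Could you rephrase?",
    "I'm still learning. Try asking about hours, orders, or returns.",
]

def handle_message(classify, message, attempt=0):
    """Route a message, falling back gracefully when the NLP layer errs."""
    try:
        intent = classify(message)
    except Exception:
        intent = None  # treat an NLP error like a 500: recover, don't crash
    if intent is None:
        # Walk through designed fallbacks, then hand off to a human.
        if attempt < len(FALLBACKS):
            return FALLBACKS[attempt]
        return "Let me connect you with a teammate who can help."
    return f"Handling intent: {intent}"
```

The point is that the error path is designed, not an afterthought: the user always gets a next step, exactly as a good error page gives them one.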
I think the disconnect here is that conversational apps and bots don’t need to pass the Turing test. There’s an interesting quote in the article Think Differently When Building Bots, where the author talks about being smart by playing dumb: own the current state of the technology, because if you try to fight it, you’re likely to mess it up. In the long run this will become less true, but for now it’s great advice and works very well with the current bot frameworks.
This space is still young, and we’re just scratching the surface. Init.ai, for example, is trying to solve some of the more nuanced parts of conversation, including context switching, segues, and random utterances, among other things. These all add up to push the technology further and give people the tools to build better end-user experiences.