Great thoughts in this article, but why would this only be possible with bots? The interaction above can easily be done by going through two or three cards and pressing a button each time, and it will feel more instantaneous: Chat app > Scan > Two beers (button) > Bud (button) > **** 0345 (card on file) > Done.
Typing and reading what the bot says while the stadium erupts in cheers and everyone is shoving past everyone? Or ordering something “to go” by typing while you’re driving, instead of just pressing three buttons? We could make these bots work with speech recognition, but I think we all know how well that works. “Hey Siri… Oh, forget it, might as well do it myself.”
Thing is, the issues you described above are simply issues of bad app design, not interactions we couldn’t have handled in any other meaningful way. The same issues will persist with bots if they’re badly implemented (quite a few companies will always resort to home-made solutions instead of using third-party APIs or services). The bot will order a bag of Cheetos for the guy on your left instead of an ice-cold Bud for you. Or it won’t understand what you said and go, “I’m sorry, I didn’t…”
Improving speech and facial recognition libraries, along with better APIs and code, will definitely usher in the era of AI, which looks like it’s starting with bots. But apps don’t cost millions these days (I really hope we’re past that era, even though I’m sure some companies still charge an arm and a leg for them), and a bot will still have to be implemented in an app anyway, for the “brand” to feel authentic and “tailored to the customer”.
Other than that, thanks for the read!