IMO the reason why Siri and its Google equivalent don't seem to have fully taken off yet is twofold. First, we feel slightly embarrassed speaking on the phone in public (hence the rise of conversational messaging apps), let alone speaking to a computer in public. When I think about my day, the window in which I'd feel comfortable speaking to a machine is incredibly small. From my morning commute to my working environment, most of my day is spent around a lot of people. If we all started speaking to our phones, it would get very odd and confusing: “Excuse me…are you talking to me?” “No, sorry, Siri.” I think this is why the Amazon Echo actually has a chance. It makes sense in the context of the home and the conversations we have amongst our family.
The other reason is that it's incredibly hard to understand the features and capabilities of these AI personal assistants. If you are highly digitally literate, you might have an idea that if you say the right thing, in exactly the right order, your phone will order you a pizza. But to be honest, I'm not sure I would trust it not to mess up my order, or not to deliver to my billing address instead of my work address, without a GUI to check the details.
This is a fantastic opportunity for AI, and I'm excited about what happens next. But I feel we need to be realistic about the hurdles still to overcome, especially in speech UIs. Love your articles btw Chris, big fan!!