Zero Discovery - Why Bot Developers Might Stay Away From Voice Platforms
Tam

There are a few things that I would like to comment on:

For Actions on Google, you never had to enable any Actions to use them. Alexa now allows you to use skills without having to enable them first. http://fortune.com/2017/04/04/amazon-change-easier-alexa-skill/

On Google Home, you can say things like “I want to play a game” and Google will suggest voice apps for you. For my Voice Tic Tac Toe application, I set “play tic tac toe” and “play a game” as phrases that can be used to discover Voice Tic Tac Toe. I can see how many people used these commands by looking at the Discovery tab under Analytics in the Actions on Google console (see the sketch after the video link below).
https://www.youtube.com/watch?v=By972_gh9DY&list=PL9g5RfWLvLrZd2BhhTj0J4s_iC9j8kgwd&index=7&t=14s
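
To make the idea concrete, here is a minimal sketch in TypeScript of how several discovery phrases might all map to one voice app. This is an illustration only, not the Actions on Google API; the phrase list mirrors the ones above, and the matching logic is deliberately simplified.

```typescript
// Illustrative only: a simplified stand-in for implicit invocation,
// where several discovery phrases all lead users to one voice app.
const DISCOVERY_PHRASES: ReadonlyArray<string> = [
  "play tic tac toe",
  "play a game",
  "i want to play a game",
];

// Returns true if the utterance contains any registered discovery phrase.
function matchesVoiceTicTacToe(utterance: string): boolean {
  const normalized = utterance.trim().toLowerCase();
  return DISCOVERY_PHRASES.some((phrase) => normalized.includes(phrase));
}

console.log(matchesVoiceTicTacToe("I want to play a game")); // true
console.log(matchesVoiceTicTacToe("Play Tic Tac Toe"));      // true
console.log(matchesVoiceTicTacToe("what time is it"));       // false
```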

“Users have to say commands in just the right way.” This is not the fault of the device but of the developer. In my Voice Metronome application, I have “slow down”, “decrease”, “lower”, and more as synonyms for “slower”. Also, I use API.AI, which has natural language understanding built in. I would assume the experience on Actions on Google is better because the developer can see what users are actually saying (a rough sketch of the synonym idea follows the link below).
https://www.voicebot.ai/2017/07/18/why-you-want-amazon-to-provide-alexa-utterance-data-to-developers/
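
As a rough illustration of what that synonym handling does, here is a hand-rolled sketch in TypeScript. It is not API.AI’s actual entity mechanism, whose NLU matches far more flexibly; it only shows the core idea that many phrasings normalize to one canonical command, so users do not have to say it “just right”.

```typescript
// Minimal sketch: a synonym table that normalizes different phrasings
// to one canonical tempo command. API.AI's built-in entities do this
// matching far more flexibly; this shows only the core idea.
type TempoCommand = "slower" | "faster";

const SYNONYMS: Record<string, TempoCommand> = {
  "slower": "slower",
  "slow down": "slower",
  "decrease": "slower",
  "lower": "slower",
  "faster": "faster",
  "speed up": "faster",
  "increase": "faster",
};

// Returns the canonical command, or undefined if unrecognized.
function parseTempoCommand(utterance: string): TempoCommand | undefined {
  return SYNONYMS[utterance.trim().toLowerCase()];
}

console.log(parseTempoCommand("Slow down")); // "slower"
console.log(parseTempoCommand("lower"));     // "slower"
console.log(parseTempoCommand("mumble"));    // undefined
```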

Note: I work at Google, but not on the Assistant team. Also, these opinions are my own.
