There are a few things that I would like to comment on:
For Actions on Google, you have never had to enable an action before using it. Alexa now also allows you to use skills without enabling them first. http://fortune.com/2017/04/04/amazon-change-easier-alexa-skill/
On Google Home, you can say things like “I want to play a game” and Google will suggest voice apps for you. For my Voice Tic Tac Toe application, I set up “play tic tac toe” and “play a game” as phrases that discover Voice Tic Tac Toe. I can see how many people used these commands by looking at the Discovery tab under Analytics in the Actions on Google console.
“Users have to say commands in just the right way.” This is not the fault of the device but of the developer. In my Voice Metronome application, I have “slow down”, “decrease”, “lower”, and more as synonyms for “slower.” I also use API.AI, which has natural language understanding built in. I would expect the experience on Actions on Google to be even better, because the developer can see what users are actually saying and add the phrasings they missed.
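Conceptually, the synonym handling described above boils down to mapping many user phrasings onto one canonical command. In API.AI this is configured in the console rather than written by hand, so the sketch below is only an illustration of the idea; the table contents and function names are made up:

```python
from typing import Optional

# Hypothetical synonym table for a metronome-style voice app.
# In API.AI this would live in the entity/intent configuration.
SYNONYMS = {
    "slower": ["slower", "slow down", "decrease", "lower"],
    "faster": ["faster", "speed up", "increase", "raise"],
}

# Invert the table once so each utterance lookup is a single dict access.
CANONICAL = {
    phrase: command
    for command, phrases in SYNONYMS.items()
    for phrase in phrases
}

def resolve_command(utterance: str) -> Optional[str]:
    """Map a user's phrasing to a canonical command, or None if unknown."""
    return CANONICAL.get(utterance.strip().lower())

print(resolve_command("Slow down"))  # slower
print(resolve_command("increase"))   # faster
print(resolve_command("banana"))     # None
```

The point is that “just the right way” is a design choice: every extra phrase the developer adds to the table (or, with real NLU, every training phrase) widens what counts as “right.”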
Note: I work at Google, but not on the Assistant team. These opinions are my own.