We’ll probably soon see the end of voice menus — maybe in the next year or two. I just went through a Bluetooth pairing setup in a car, and speaking was a much simpler input method than navigating menus on a pixel-based display. Even so, menus remain a primitive way to interact by voice.
One of the biggest benefits of natural language understanding is the ability to match many different phrasings to a single intent. Menus can’t do this: they either limit the user to a fixed set of options at each step or rely on rigid rules to match speech-to-text output against expected responses.
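To make the contrast concrete, here is a toy sketch (not a real NLU engine — the intent names and example phrases are invented for illustration) of how many phrasings can map to one intent by scoring word overlap, versus a menu that only accepts exact options:

```python
from typing import Optional

# Many different phrasings all map to a single intent.
INTENT_EXAMPLES = {
    "pair_bluetooth": [
        "pair my phone",
        "connect bluetooth",
        "set up my phone with the car",
        "link my device",
    ],
    "play_music": [
        "play some music",
        "put on a song",
        "start the playlist",
    ],
}

def match_intent(utterance: str) -> Optional[str]:
    """Pick the intent whose example phrases share the most words with
    the utterance -- a crude stand-in for real NLU similarity scoring."""
    words = set(utterance.lower().split())
    best_intent, best_score = None, 0
    for intent, examples in INTENT_EXAMPLES.items():
        for example in examples:
            score = len(words & set(example.lower().split()))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

# A rigid menu, by contrast, only accepts its exact listed options:
def menu_select(utterance: str) -> Optional[str]:
    options = {"pair phone": "pair_bluetooth", "play music": "play_music"}
    return options.get(utterance.lower())
```

Here `match_intent("can you connect my bluetooth")` finds the pairing intent, while `menu_select` returns nothing for the same input because it isn’t an exact menu option. Real engines replace the word-overlap score with trained models, but the one-intent-many-phrasings structure is the same.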
As the tools for building NLU engines spread and become easier to use, they’ll eventually eclipse the ease of setting up a rules engine. That will make it even easier to expose APIs to Alexa, Google Assistant, Siri, and other services. In the not-too-distant future, asking how many Skills or Actions are available will seem like an irrelevant question.