While voice data analytics are key to understanding which Alexa skills are getting used (and how), the thing in your article that resonated with me most was the realization of how analytics might even be collected for conversational UIs and machine learning.
Let’s think about this in the broad terms of digital projects. Today we have massive tagging plans for analytics, and whole suites of products exist to analyze this data: heat-map generators, mouse-activity analyzers, and so on.
What do we have for this new voice-command frontier? NOTHING. As you mentioned, we are at the mercy of Amazon to provide any of this data at all. And even if we wanted to implement our own analytics collection, is that currently even possible? How would it work?
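Thinking out loud about your question: since a skill's backend does receive the full JSON request from Amazon, we could at least log intent-level events ourselves, even without an analytics product for voice. Here's a minimal sketch in Python, assuming Amazon's documented request shape (`request.type`, `request.intent.name`, `intent.slots`); the event schema and function name are just my own illustration, not any official API:

```python
import json
import time


def extract_intent_event(request_body: str) -> dict:
    """Parse an Alexa-style request payload into a flat analytics event.

    The payload fields read here (request.type, request.intent.name,
    intent.slots) follow Amazon's documented JSON request format; the
    output event schema is a hypothetical example of what we might log.
    """
    payload = json.loads(request_body)
    req = payload.get("request", {})
    intent = req.get("intent", {})
    slots = intent.get("slots", {})
    return {
        "timestamp": time.time(),                 # when we saw the request
        "request_type": req.get("type"),          # e.g. "IntentRequest"
        "intent_name": intent.get("name"),        # which intent fired
        "slot_values": {name: slot.get("value")   # what the user actually said
                        for name, slot in slots.items()},
    }


# Example Alexa-style IntentRequest, trimmed to the fields used above.
sample = json.dumps({
    "request": {
        "type": "IntentRequest",
        "intent": {
            "name": "OrderPizzaIntent",
            "slots": {"size": {"name": "size", "value": "large"}},
        },
    }
})

event = extract_intent_event(sample)
print(event["intent_name"])  # OrderPizzaIntent
```

From there you'd ship each event to whatever store or dashboard you like. Of course this only captures what Amazon forwards to the skill; anything upstream (wake-word activity, failed recognitions, utterances routed elsewhere) stays locked inside their platform, which is exactly the dependency you described.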
I think you just opened up a brand-new niche for SMEs and product specialists.
I'd also like to know your opinion on the addition of the screen and camera when the Amazon Echo Show comes out next month. Then we won't just have voice analytics to deal with; we'll have touch and camera input too. Analytics for the "real-time experience" of the future will go far beyond voice commands.