‘Sorry, I cannot understand that.’ by Siri

A talking computer? Jarvis? HAL 9000? (Not your friendly A.I.) You have most definitely dreamt about a future of talking computers, one you can engage with without getting replies like ‘I cannot understand that’ or ‘I cannot do that.’
Everyone expects computers to understand everything they say. Apple advertises Siri as a personal assistant you can say anything to. But we all know how far that is from reality.
By the numbers.

It’s a speech designer’s job to make it seem as if users can say anything. We are not talking about one user; we are talking about 2.32 billion different users, a huge number compared to the number of software developers working on improving the solution.
There are many issues that could be improved within a year. The most common are speech detection and the microphone itself.
The hardest part, however, is understanding statements, contexts and intents. There is an infinite variety of statements that ordinary users could produce, and the same goes for intents. There is no way that billions of people all say ‘good morning’ the same way, and not everyone means the same thing by what they say.
Voice recognition software won’t always put your words on the screen completely accurately. It will never understand the context of language the way humans can, which leads to errors of misinterpretation. When you talk to people, they decode what you say and give it meaning. This drives people away from the technology, because the most common responses are ‘I cannot understand it’, ‘I cannot answer that’ and ‘Sorry, but I am not able to do that.’ Worse, we are left to accept whatever the response is and hope that some developer will add that feature or response in the future.
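To make the coverage problem concrete, here is a minimal, purely illustrative sketch (not Project Audrey’s actual approach; the phrase list and the ‘greet’ intent are made up) of how a hand-built intent matcher only recognizes the phrasings its developers thought to list:

```python
# Purely illustrative: a hand-written intent table, the way a small team
# of developers might try to anticipate what "anyone" could say.
GREETING_PHRASES = {
    "good morning",
    "morning",
    "gm",
    "top of the morning to you",
}

def detect_intent(utterance: str) -> str:
    """Return 'greet' if the utterance matches a known phrasing,
    otherwise fall back to the dreaded default reply."""
    text = utterance.lower().strip(" !.?")
    if text in GREETING_PHRASES:
        return "greet"
    return "Sorry, I cannot understand that."

print(detect_intent("Good morning!"))            # -> greet
print(detect_intent("mornin', how's it going"))  # -> Sorry, I cannot understand that.
```

Billions of users will always produce phrasings that no fixed list like this can cover, which is exactly the gap that more, and more diverse, contributors could help close.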

It is just like talking to another person: we dislike talking to people who fail to understand anything we say.
The underlying problem is not speech recognition itself. It is that too few people are involved in developing it. There must be a solution whereby ordinary people can develop speech recognition from anywhere.

We have decided to start Project Audrey A.I to combat this problem by allowing anyone, from any background, to develop speech recognition by training it through an app, rather than having to scroll through Stack Exchange or GitHub for solutions or watch YouTube tutorials. No coding knowledge is needed to develop it.
Speech recognition’s greatest problem is solved by getting more people from various backgrounds to develop it; that is how we improve computers’ understanding of our contexts and intents.
Involvement is key.
Love what we are doing? Don’t just read; take part by joining the beta waiting list on the website.

