Hello from AssemblyAI
Recent advances in Deep Learning have made speech recognition much more accurate and reliable. This has opened up the possibility for unique and creative voice interfaces, and voice features in general, that actually deliver a good experience for customers.
As a consumer, you can see this with products like the Amazon Echo, Siri, and Google Now. The Echo has been a runaway hit, for example, selling millions of devices and receiving rave reviews. A product like this would not have been possible 10, or even 5, years ago, in large part because the speech recognition technology required to deliver a good experience simply wasn't accurate or robust enough.
This technology shift is exciting because it has the potential to create many new experiences and products, both for consumers and for businesses.
Unfortunately, getting access to this new technology isn't simple. Sure, as a consumer you have access to Apple's speech recognition on your iPhone, Google's speech recognition in Chrome, or Amazon's speech recognition on your Alexa. But getting access to this new Speech Recognition technology to power your own company's or startup's project is still difficult, to say the least.
Minimal customization options, poor API documentation, lengthy legal processes, and high costs are just some of today's hurdles to creating unique and exciting new products powered by voice. If you're not a giant technology company that can afford to invest in creating proprietary, cutting-edge AI technology like Speech Recognition, your options are limited and frustrating.
We want to remove these hurdles, and make cutting edge Speech Recognition technology easily accessible to everyone, from independent developers to global companies. We're delivering on this with a highly accurate, robust, and extremely customizable Speech Recognition API.
Our Speech Recognition technology is powered by a proprietary Deep Neural Network architecture and a large internal dataset built from the Internet (more on this in a future post), and it is always improving.
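To give a feel for what calling a speech-to-text API like this typically looks like, here is a minimal sketch in Python. The endpoint URL, field names, and API key below are purely illustrative placeholders, not AssemblyAI's documented interface:

```python
import json

# Hypothetical sketch of preparing a request to a speech-to-text HTTP API.
# The endpoint, header, and body fields are illustrative assumptions only.
API_URL = "https://api.example.com/v1/transcripts"  # placeholder endpoint


def build_transcription_request(audio_url, api_key, keywords=None):
    """Assemble the headers and JSON body for a transcription request."""
    headers = {
        "Authorization": api_key,
        "Content-Type": "application/json",
    }
    body = {"audio_url": audio_url}
    if keywords:
        # Many speech APIs let you boost recognition of domain-specific terms.
        body["keywords"] = keywords
    return headers, json.dumps(body)


headers, body = build_transcription_request(
    "https://example.com/meeting.wav",
    api_key="your-api-key",
    keywords=["AssemblyAI", "transcription"],
)
print(body)
```

In a real integration you would POST this body to the provider's endpoint (for example with the `requests` library) and poll or receive a callback for the finished transcript.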