Recently, I finally got around to setting up an Amazon Alexa I won at a hackathon during GraphConnect San Francisco 2016. I’ve been very pleased with the experience, except for the US- and UK-only features such as the mobile application.
While I could certainly find workarounds on the internet, my inner geek thought it would be great to create a mobile app, take Siri’s native speech recognition features and use them against my own Alexa skills.
I also think we will interact with our devices by voice more and more, and sooner than expected. Mary Meeker’s widely read report also states that in 2016, 25% of mobile searches were done by voice.
In this tutorial, I’ll cover how to create a mobile app with Ionic and use the speech recognition API, which returns a set of possible matches for what was said.
Let’s go!
Bootstrapping the Ionic app
The first step will be to create a basic Ionic app and prepare our template.
ionic start speechy blank
“speechy” is the name of my Ionic app, created from the blank template. Running ionic serve will open a live-reloading page of the app in your browser.
We will now modify our template to add a “Speak to me” button as well as a couple of debugging hints:
- isListening: true/false
- an item list showing the possible matches returned by the speech recognition API
- a button to request the user’s permissions for speech recognition and the microphone
Note that I will build only for iOS for the sake of this blog post.
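As a sketch, the page template could look like the following (an Ionic 3-style template; the bindings assume the page component exposes isListening, matches, getPermission() and listen() members, which are my own naming choices):

```html
<!-- src/pages/home/home.html (sketch) -->
<ion-header>
  <ion-navbar>
    <ion-title>Speechy</ion-title>
  </ion-navbar>
</ion-header>

<ion-content padding>
  <!-- Requests the speech recognition and microphone permissions -->
  <button ion-button (click)="getPermission()">Get Permission</button>
  <!-- Starts listening; tapping again stops the listening mode -->
  <button ion-button (click)="listen()">Speak to me</button>

  <p>isListening : {{ isListening }}</p>

  <!-- Possible matches returned by the speech recognition API -->
  <ion-list>
    <ion-item *ngFor="let match of matches">{{ match }}</ion-item>
  </ion-list>
</ion-content>
```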
Now we need to add the Ionic and Cordova plugins required to access the phone’s native capabilities, in particular the Speech Recognition API.
npm install @ionic-native/core --save
npm install @ionic-native/speech-recognition --save
ionic cordova plugin add cordova-plugin-speechrecognition
Then add the plugin to your app module’s imports and register it in your providers:
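As a sketch (assuming an Ionic 3 / Angular project with the usual starter layout; your page names may differ), the relevant parts of src/app/app.module.ts would look like:

```typescript
// src/app/app.module.ts (sketch)
import { NgModule, ErrorHandler } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { IonicApp, IonicModule, IonicErrorHandler } from 'ionic-angular';
import { SpeechRecognition } from '@ionic-native/speech-recognition';

import { MyApp } from './app.component';
import { HomePage } from '../pages/home/home';

@NgModule({
  declarations: [MyApp, HomePage],
  imports: [BrowserModule, IonicModule.forRoot(MyApp)],
  bootstrap: [IonicApp],
  entryComponents: [MyApp, HomePage],
  providers: [
    // Exposes the cordova-plugin-speechrecognition API to our pages
    SpeechRecognition,
    { provide: ErrorHandler, useClass: IonicErrorHandler },
  ],
})
export class AppModule {}
```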
The next step is to use the Speech Recognition API in our page’s .ts file:
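A minimal sketch of that page class follows; property and method names such as getPermission(), listen() and matches are my own choices, not imposed by the plugin:

```typescript
// src/pages/home/home.ts (sketch)
import { Component, NgZone } from '@angular/core';
import { SpeechRecognition } from '@ionic-native/speech-recognition';

@Component({
  selector: 'page-home',
  templateUrl: 'home.html',
})
export class HomePage {
  isListening = false;
  matches: string[] = [];

  constructor(private speech: SpeechRecognition, private zone: NgZone) {
    // Check on startup whether we already have the necessary permission
    this.speech.hasPermission()
      .then((has: boolean) => console.log('hasPermission:', has));
  }

  getPermission(): void {
    // Pops up the native dialogs for speech recognition and the microphone
    this.speech.requestPermission()
      .then(() => console.log('permission granted'))
      .catch(() => console.log('permission denied'));
  }

  listen(): void {
    if (this.isListening) {
      // stopListening() is only available on iOS
      this.speech.stopListening();
      this.isListening = false;
      return;
    }
    this.isListening = true;
    this.speech.startListening().subscribe(
      (matches: string[]) =>
        // Run inside NgZone so Angular change detection updates the view
        this.zone.run(() => (this.matches = matches)),
      (err) => console.error(err)
    );
  }
}
```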
The explanations are as follows:
- In the constructor, we check whether the app has the necessary permissions to use the speech recognition API; this pops up the native dialogs asking for your approval, both for speech recognition and for microphone access. This makes use of the plugin’s hasPermission and requestPermission methods.
- When the user taps the button, we first check whether we are already in listening mode; if so, we stop listening and return (the stopListening method is only available on iOS). Otherwise, we call startListening on the speech recognition API and subscribe to it in order to store the recognized text. The value returned by the API is an array of strings (candidate sentences).
- We make use of NgZone to trigger Angular’s change detection so the displayed elements update.
Unfortunately, the native speech recognition API cannot be tested in the browser, so we’ll have to actually deploy the app to a phone with Xcode.
Building and testing the app on your phone
As I’m building for iOS, we’ll make use of Xcode; make sure you have it installed first.
Prepare your application:
ionic cordova platform add ios
ionic cordova prepare ios
ionic cordova build ios
Then open Xcode and load your project’s build from the platforms/ios directory.
In Xcode, you will have to select the personal team associated with your Apple account for code signing.
Normally you should now encounter your first error: a standard Ionic starter app ships with the app namespace (bundle identifier) set to io.ionic.starter. Change it to something meaningful to you; for me it will be io.willemsen.app, for example.
Change it and re-run the prepare and build commands.
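In a Cordova project, this identifier lives in the config.xml file at the project root; only the id attribute of the widget element needs to change (the value shown here is mine, use your own):

```xml
<!-- config.xml (project root, sketch) -->
<widget id="io.willemsen.app" version="0.0.1"
        xmlns="http://www.w3.org/ns/widgets"
        xmlns:cdv="http://cordova.apache.org/ns/1.0">
  <!-- name, description, plugin and platform entries stay as they are -->
</widget>
```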
The next step is to plug your iPhone into your Mac, choose it as the target in Xcode, and click the Run (play) button.
This will build the project, transfer it to your phone and launch it there.
In the Xcode console, you can also inspect your app’s logs:
2017-06-02 22:27:23.866235+0200 MyApp[2612:759736] Finished load of: file:///var/containers/Bundle/Application/A5F9C028-8737-49CA-A1C5-B8D35ADB35F3/MyApp.app/www/index.html
2017-06-02 22:27:24.008490+0200 MyApp[2612:759736] Ionic Native: deviceready event fired after 836 ms
Once your app is launched, tap the Get Permission button and you will be prompted for approval:
Then tap the “Speak to me” button, say a few sentences, and tap the button again to stop the listening mode:
The app will then display a collection of possible sentences understood by the speech recognition system; you can find them in the logs as well:
2017-06-02 22:43:09.667891+0200 MyApp[2657:766339] listen action triggered
2017-06-02 22:43:09.667974+0200 MyApp[2657:766339] listening mode is now : false
2017-06-02 22:43:09.667991+0200 MyApp[2657:766377] stopListening()
2017-06-02 22:43:09.888600+0200 MyApp[2657:766339] startListening() recognitionTask result array: ("Can I have two cappuccino", "Can I have to cappuccino", "Going to have to cappuccino", "Can I half to cappuccino", "Going to half to cappuccino")
2017-06-02 22:43:09.888688+0200 MyApp[2657:766339] startListening() recognitionTask isFinal
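Since the API hands back several candidate transcriptions, an app will typically want to map them onto a known command. As a hedged illustration (bestMatch and the command list are my own hypothetical names, not part of the plugin), a simple word-overlap scorer could look like this:

```typescript
// Pick the command that best overlaps with any of the candidate
// sentences returned by the speech recognition API.
function bestMatch(candidates: string[], commands: string[]): string | null {
  let best: string | null = null;
  let bestScore = 0;
  for (const command of commands) {
    const commandWords = new Set(command.toLowerCase().split(/\s+/));
    for (const candidate of candidates) {
      const words = candidate.toLowerCase().split(/\s+/);
      // Fraction of the command's words heard in this candidate
      const score =
        words.filter((w) => commandWords.has(w)).length / commandWords.size;
      if (score > bestScore) {
        bestScore = score;
        best = command;
      }
    }
  }
  return best;
}

const heard = [
  'Can I have two cappuccino',
  'Can I have to cappuccino',
  'Going to have to cappuccino',
];
console.log(bestMatch(heard, ['order cappuccino', 'turn on the lights']));
// → "order cappuccino"
```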
How cool is that?
You can find a short video I made with QuickTime for a brief demo of the app here: https://youtu.be/i1DZpVqSqQI
I’m very impressed by how simple Ionic makes it to build mobile apps. Integrating voice opens up a new dimension of possibilities, especially in combination with our research on knowledge graphs.
It also offers the possibility to combine all of your other devices, like Alexa or Raspberry Pis, and voice-control them from a single app. I’m looking forward to blogging more about this.
You can find the full code of the app on my GitHub: https://github.com/ikwattro/ionic-speechy