Accessible UWP apps — Chapter 2: voice control & Narrator support

Niels Laute
4 min read · Mar 10, 2019


This is the second part of a series of articles that can help you create more inclusive UWP apps and make you look cool while doing it!

In the first part of this series, we enhanced our app with gaze control, which allows users to control apps with their eyes. In this article we’ll explore another input method: voice commands. Of course, the UWP platform has a long history of Cortana support for launching and controlling apps, but here we’ll be looking at in-app voice commands to control your UWP app. This is really useful for users with disabilities, or as an input method for e.g. a kiosk app. And since this app runs on Windows IoT Core as well, it makes a great way to control your installation!

Voice control

Let’s get started. For this demonstration, I’ll be using the same code sample as last time: HomeHub.

We’ll alter the code so that the app opens up the microphone and the user can call out a room name to navigate to its details page. In addition, we’ll incorporate the voice command ‘select’: the code will check which room currently has focus and navigate to that room’s page.

To enable voice control, the user has to grant the app permission to use the microphone. For that, we first need to declare the Microphone capability in the app manifest (Package.appxmanifest):
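A minimal sketch of the relevant manifest entry; the rest of the manifest is omitted:

<Capabilities>
  <!-- Lets the app ask the user for access to the microphone -->
  <DeviceCapability Name="microphone" />
</Capabilities>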

Now we’ll need to create a new instance of the SpeechRecognizer class and add some event handlers:
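A sketch of the setup code; the field and method names are mine, not necessarily the ones used in HomeHub:

using System;
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

private SpeechRecognizer _recognizer;

private async Task StartListeningAsync()
{
    _recognizer = new SpeechRecognizer();

    // Constraints must be compiled before a session can start; without
    // custom constraints the recognizer falls back to free-form dictation.
    await _recognizer.CompileConstraintsAsync();

    _recognizer.HypothesisGenerated += Recognizer_HypothesisGenerated;
    _recognizer.ContinuousRecognitionSession.ResultGenerated += Session_ResultGenerated;
    _recognizer.ContinuousRecognitionSession.Completed += Session_Completed;

    await _recognizer.ContinuousRecognitionSession.StartAsync();
}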

In the above code we start a new ContinuousRecognitionSession. The HypothesisGenerated event is triggered as soon as the recognizer picks up speech, and keeps firing with updated guesses while it’s still processing. Whenever the system is reasonably sure about what it heard, the ResultGenerated event is triggered. And lastly, the Completed event gets called whenever the session is over (e.g. after the ResultGenerated event). We can use this event to restart the session so we can handle multiple voice commands in a row.
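The restart can be as simple as this sketch:

private async void Session_Completed(
    SpeechContinuousRecognitionSession sender,
    SpeechContinuousRecognitionCompletedEventArgs args)
{
    // Kick off a fresh session so the app keeps listening for the next command.
    await _recognizer.ContinuousRecognitionSession.StartAsync();
}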

Since these events fire on a background thread, the Dispatcher is required whenever we want to do UI stuff.

For illustration purposes we’ll do a simple .Contains() check to see if a room name is mentioned, and if so, that room will be selected. To do that, we’ll put the following code in the ResultGenerated event handler:
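Something like the sketch below, where Rooms and RoomDetailPage stand in for whatever collection and page types HomeHub actually uses:

using System.Linq;
using Windows.UI.Core;

private async void Session_ResultGenerated(
    SpeechContinuousRecognitionSession sender,
    SpeechContinuousRecognitionResultGeneratedEventArgs args)
{
    string spoken = args.Result.Text.ToLower();

    // The event fires on a background thread, so hop to the UI thread first.
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        var room = Rooms.FirstOrDefault(r => spoken.Contains(r.Name.ToLower()));
        if (room != null)
            Frame.Navigate(typeof(RoomDetailPage), room);
    });
}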

Let’s test it… success! Now, on the details page, add similar code to navigate back (e.g. when the user says ‘go back’).
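On the details page the check could look roughly like this:

if (args.Result.Text.ToLower().Contains("go back"))
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        if (Frame.CanGoBack)
            Frame.GoBack();
    });
}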

I also added a check for the ‘select’ voice command on the MainPage. This allows the user to say ‘select’, and the room that currently has focus will then be used to navigate.
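One way to do that is to ask the FocusManager which element has focus. This sketch assumes the rooms are shown in a GridView of Room items; adjust it to the actual control:

using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Input;

if (spoken.Contains("select"))
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        // Ask XAML which element currently has focus.
        if (FocusManager.GetFocusedElement() is GridViewItem item &&
            item.Content is Room room)
        {
            Frame.Navigate(typeof(RoomDetailPage), room);
        }
    });
}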

Now, let’s see if it works. The first time the app runs, a microphone access dialog will come up that needs to be accepted.

Narrator support

Now that this works, let’s see if we can optimize our app for Narrator support as well. Narrator is Windows’ built-in screen reader for users with limited vision: it reads the elements on the screen aloud using text-to-speech.

When you turn on Narrator for HomeHub, you’ll hear the voice describing the rooms list as ‘models:Room’ instead of the actual room name. Luckily, this is easy to change. With the AutomationProperties API we can change the description of the element. In our case, we’ll bind it to the Name of the room.
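In the item template that could look like the sketch below; the actual template in HomeHub will have more going on:

<DataTemplate x:DataType="models:Room">
    <!-- Narrator reads AutomationProperties.Name instead of the type name -->
    <Grid AutomationProperties.Name="{x:Bind Name}">
        ...
    </Grid>
</DataTemplate>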

Next time, Narrator will call out the right room name!

Conclusion

Voice commands can be very useful for controlling your UWP app while making it more accessible. And in combination with other input methods (e.g. gaze) they can be a very powerful way to speed up interaction with your UI.
This article shows how easy it is to set up simple speech recognition. Of course, if you want to support more natural language, services like LUIS.ai can really help, and they are pretty easy to plug into the current UWP SpeechRecognition code.
As always, the source is available on GitHub.

