Voice Activation
Ubiquitous computing is slowly becoming a reality. The growing number of voice-activated devices is the single biggest step toward living in a completely connected environment. How can we expect our experiences in and out of the home to change with voice interaction, and what are the key points for building these experiences?
We started our discussion from our experiences developing for voice interaction, during which we highlighted two key points to address before any development begins: social environment and service layers.
Social Environment
The problem with voice is that it’s hard to be discreet; it’s an open form of communication, which can present real problems, particularly around user adoption. Users may feel intimidated by the idea of having to speak out loud to an inanimate object, particularly in public where others can hear. I can’t help but have the same reaction to people who talk to Siri in public as I did to those who wore flashing Bluetooth earpieces in the noughties. We concluded that for voice activation to work properly, the user must have a moment of public isolation or a hands-free service.
To clarify, public isolation is an environment in which the user is not surrounded by people who might make them feel self-conscious. The home is an example of public isolation; a tourist information desk is not. However, this is a grey area, as public environments differ: imagine a voice-activated elevator where one would say the floor to which the elevator should travel. Although the elevator is often filled with strangers, the interaction between user and elevator is common to everyone present, and as such is more socially acceptable.
Service Layers
During our development process (see the ‘Product Spotlight’ section) we identified the types of interaction that work best in a voice-activated environment. These are what we call ‘micro-services’, e.g. requesting that the music be turned on, retrieving a piece of information, or turning off the lights. Micro-services are bite-sized interactions that can be executed faster by voice than by interacting with a smart-screen device. It’s also important to note that the returned information is usually short, such as an instruction or a confirmation that a service has been executed. Pinpointing micro-service opportunities in spaces such as retail will be important for finding key engagement points with voice.
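To make this concrete, here’s a rough sketch of how a micro-service layer might be wired up: each recognised voice intent maps to a small handler that does one thing and returns a short spoken confirmation. The intent names and handlers below are hypothetical, purely for illustration.

```python
# A minimal sketch of the micro-service idea: each voice intent maps to a
# small handler that performs one action and returns a short confirmation.
# Intent names and device calls are hypothetical placeholders.

from typing import Callable, Dict

def turn_on_music(slots: dict) -> str:
    # A real system would call out to a music service API here.
    return "Playing music in the living room."

def turn_off_lights(slots: dict) -> str:
    room = slots.get("room", "all rooms")
    return f"Lights off in {room}."

def weather_report(slots: dict) -> str:
    # Information retrieval: fetch and summarise, keeping the reply short.
    return "It's 18 degrees and clear outside."

MICRO_SERVICES: Dict[str, Callable[[dict], str]] = {
    "PlayMusicIntent": turn_on_music,
    "LightsOffIntent": turn_off_lights,
    "WeatherIntent": weather_report,
}

def handle_intent(intent_name: str, slots: dict) -> str:
    """Dispatch a recognised voice intent to its micro-service handler."""
    handler = MICRO_SERVICES.get(intent_name)
    if handler is None:
        return "Sorry, I can't help with that yet."
    return handler(slots)

print(handle_intent("LightsOffIntent", {"room": "kitchen"}))
```

The key design point is that every handler does one small job and replies with a single short sentence, which is exactly the shape of interaction voice handles well.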
Building on these insights, we highlighted key micro-service areas that offer public isolation, to provide examples of how voice can be integrated in the future.
Voice in the Home
The home automation market is the space in which voice activation will really come into its own. It’s an ideal environment, as the variety of third-party devices within the home offers multiple touch-points for micro-service interactions. Smartphone apps and devices such as Nest and Sonos are already addressing these micro-services; voice activation, however, will help streamline these interactions into a single format. Finally, within the home the user is ‘isolated from the public’, meaning that the home is the perfect environment for users to adopt the notion of voice activation.
Interactions with home devices such as the lights, the TV, and even the door locks can all be voice-activated. As such, a single command could change the entire house environment: saying ‘I’m going to bed’ could trigger the door to lock, the lights to switch off, and any other A/V appliances to turn off.
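As a rough illustration of how such a routine could work, the sketch below fans a single recognised phrase out to several device actions. The device endpoints and payloads are invented for the example; a real setup would go through each vendor’s own API for locks, lights and A/V gear.

```python
# A sketch of a "scene" or routine: one voice command fans out to several
# device actions. The URLs and payloads below are hypothetical stand-ins
# for whatever APIs the lock, lighting and A/V vendors actually expose.

import requests  # assumed available; any HTTP client would do

GOING_TO_BED = [
    ("front door lock", "http://home.local/locks/front", {"state": "locked"}),
    ("all lights",      "http://home.local/lights/all",  {"power": "off"}),
    ("living room TV",  "http://home.local/av/tv",       {"power": "off"}),
]

def run_scene(actions) -> str:
    """Execute each device action and return a short spoken confirmation."""
    for name, url, payload in actions:
        try:
            requests.put(url, json=payload, timeout=2)
        except requests.RequestException:
            return f"I couldn't reach the {name}."
    return "Goodnight. The house is locked and the lights are off."

# Triggered when the assistant recognises the phrase "I'm going to bed".
print(run_scene(GOING_TO_BED))
```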
As the voice-activated home-automation market gains traction, user confidence will grow, and we can expect to see voice integrated into more public spaces such as those described below.
Voice in Retail
The retail environment offers many scenarios where users may find themselves alone and requiring assistance in some form or another. The most distinctive use cases we found came from the fitting room: customers often realise in the changing room that they actually need a size smaller, or would prefer an item in a different colour. The process of finding a new item often results in customers leaving the shop, but what if voice activation could provide a concierge service to have these clothes brought to the changing room?
Voice activation can be combined with displays and other technologies to provide the information shoppers need.
For example, an RFID reader could register the new clothes being tried on and return suggestions such as:
Customer — “What will go well with this dress?”
Fitting room voice and mirror — “I think the XXX V-neck jumper will suit you better with the XXX dress.”
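As a rough sketch of how that exchange might be driven behind the mirror, the snippet below looks an RFID tag up in a product catalogue and uses a simple pairing table to pick the suggestion. The tag IDs, products and pairings are invented for illustration.

```python
# A sketch of the fitting-room idea: an RFID tag read from the garment is
# looked up in a (hypothetical) product catalogue, and a simple pairing
# table drives the spoken suggestion. All data here is invented.

CATALOGUE = {
    "tag-001": {"sku": "DRESS-RED-10", "type": "dress"},
    "tag-002": {"sku": "JUMPER-VNECK", "type": "jumper"},
}

PAIRINGS = {
    "dress": ["JUMPER-VNECK"],   # items that tend to be worn together
    "jumper": ["JEANS-SLIM"],
}

def suggest_for_tag(tag_id: str) -> str:
    """Turn an RFID read into a short spoken recommendation."""
    item = CATALOGUE.get(tag_id)
    if item is None:
        return "Sorry, I don't recognise that item."
    suggestions = PAIRINGS.get(item["type"], [])
    if not suggestions:
        return "That looks great on its own."
    return f"I think the {suggestions[0]} would go well with that {item['type']}."

# Customer: "What will go well with this dress?"
print(suggest_for_tag("tag-001"))
```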
Moving further, how might voice activation assist a shopper trying to find a product in store, or one who can only identify a product by its description?
Voice in the Airport
The airport is a unique environment in which to engage with passengers: emotions can run high, the process can be stressful, and there’s often a lot of time to kill. This provides many opportunities to improve the passenger experience, from the moment passengers arrive at the terminal, through the retail environments and lounges, to the airline itself.
In moments of isolation such as the airport lounge, where passengers may have their own booths, concierge services could be offered to bring extra class to the experience: for example, requesting food or drink without having to get the attention of a waiter.
— — —
Product Spotlight — Amazon Echo
Many voice assistants already exist, but none have come as close to realising the possibilities of voice-activated spaces as the Amazon Echo. The Echo is a standalone personal voice assistant similar to Siri, Cortana or Google Now, but independent of a smartphone. It differs from other voice assistants such as Apple’s Siri + HomeKit in that it offers easy web integration for third-party devices. Recently we got our hands on a couple of Echos (still not available outside of the USA…sorry Bezos) and started work on our first client project driven entirely by voice. The developer framework allows for API and database integration, making engagement with third-party devices straightforward. Currently the skills (apps) developed for the Echo are limited, most returning just spoken information or music services. Having used the Echo for only a few days, though, it’s clear this form of computer interaction will go well beyond its current abilities.
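For a flavour of what a skill looks like, here’s a minimal sketch of a handler written in the style of an AWS Lambda function. The request and response JSON is simplified from the Alexa Skills Kit format, and the intent name and device call it stands in for are hypothetical.

```python
# A minimal sketch of an Echo skill handler, Lambda-style. The JSON shapes
# are simplified from the Alexa Skills Kit format; the intent and the
# third-party device call it represents are hypothetical.

def lambda_handler(event, context):
    """Entry point invoked by the Alexa service for each voice request."""
    request = event.get("request", {})
    if request.get("type") == "IntentRequest":
        intent = request["intent"]["name"]
        if intent == "LightsOffIntent":
            # A real skill would call a third-party device API or database
            # here, e.g. an HTTP request to the lighting hub.
            speech = "Okay, the lights are off."
        else:
            speech = "Sorry, I can't do that yet."
    else:
        speech = "Welcome. Ask me to turn something on or off."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The skill itself is thin; the interesting work happens in whichever third-party APIs or databases it reaches out to, which is exactly why the Echo’s easy web integration matters.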
In Summary
Voice activation will be the biggest step in ubiquitous computing; it is more than just a Siri or Cortana service, and it has already started in the home. As more companies enter the home automation market to streamline multiple devices like Nest and Sonos, we can expect voice-activated technology to become increasingly advanced and spill over into new environments.
Tell us your thoughts! Whether you agree or not, we’d love to chat. Email hello@sharpendagency.com.