Or, “Alexa, what’s the wifi password?”
At betaworks, we like to go very deep in a set of specific categories so that we understand the nuances of a particular interface. Last week, we started voicecamp, our accelerator around verbal computing (Lightspeed, GGV Capital, and Launch Capital all invested in voicecamp companies alongside betaworks).
In anticipation of voicecamp, my friend Or Arbel and I built an Alexa skill to learn what nuances existed in voice-only products. In particular, we were interested in experimenting with onboarding. The skill does one thing: tells you the wifi password for the room you’re in.
“Alexa, ask Wiffy the wifi password”
You can install it by saying “Alexa, enable Wiffy” or by clicking here: https://www.wiffy.co
(Update: it’s on @ProductHunt here; please check it out.)
I thought I’d share a few lessons learned:
- Naming — When you call your skill “Wiffy”, you think everyone will know how to pronounce it. They don’t. Whiff-E. John Borthwick predicted this would be an issue, and he was right. We still went with the name, but it would have been really helpful to have a user testing tool; there are new ones, such as SaySpring, that help with this.
- Approval by Amazon — The Amazon skill approval process is close-ish to the Apple App Store process. They take a very close look at the skill. In our case, they even flagged an easter egg, noting that we didn’t have any documentation for a particular feature (hint: try saying “Alexa, ask Wiffy to turn on Donald Trump mode”. Or Samuel L. Jackson mode).
- Analytics — How do we tell whether people are running into issues? We used VoiceLabs for this.
- Porting — Once we finished the Alexa skill, we wanted to make a version for Google Home. We can use API.ai for this, though I suspect we could have started there and developed for both from the beginning.
- Account Linking — This is a SUPER clunky process. Given that we’re asking for people’s wifi credentials, which are often not pronounceable (i.e., not ideal for inputting via a voice interface), we decided to use text messaging instead. We ask you for your phone number and then complete the rest of the onboarding via text. This felt more elegant than the account linking feature Alexa offers, since it’s not ideal to ask people to open the Alexa app, search for our skill, tap “account linking”, and only then enter the info.
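To make the “tell you the password” part concrete, here’s a minimal sketch of the response an Alexa skill sends back. The JSON envelope shape (version / response / outputSpeech) is the standard Alexa format; the handler name, the device-keyed password store, and the wording are all hypothetical, not Wiffy’s actual implementation.

```python
# Hypothetical store mapping a device ID to its room's wifi password.
PASSWORDS = {"device-123": "hunter2"}


def handle_password_intent(device_id):
    """Build the spoken reply for 'Alexa, ask Wiffy the wifi password'."""
    password = PASSWORDS.get(device_id)
    if password is None:
        text = ("I don't have a password on file yet. "
                "Say your mobile number to set one up.")
    else:
        text = f"The wifi password is {password}."
    # Standard Alexa response envelope: Alexa reads outputSpeech.text aloud.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```

In a real skill this function would be wired up as the handler for a custom intent, and the store would live in a database rather than a dict.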
Using SMS + Voice as a UI
I think it’s worth sharing the onboarding flow in more detail, as it really reduced friction for people like my parents, who don’t understand how to do account linking with Amazon. I’m writing the flow down instead of using screenshots since, well, some of it uses voice, which isn’t really screenshot-able.
We start by welcoming people to the skill and asking for their phone number. We had to update this to ask specifically for their mobile phone number: since many people keep an Echo in their home or kitchen, their first instinct wasn’t necessarily to give a mobile number.
They literally say their phone number out loud, and then get a text message from the Wiffy skill asking for their wifi network name. This is a pretty cool experience that most people hadn’t seen before or really contemplated.
Typing wifi credentials over text turned out to be a great way for people to enter them quickly. The final text the user gets says, “Awesome! Let’s try it. Just say ‘Alexa, ask Wiffy what is the wifi password?’”
So now they 1) get to experience it, and 2) have a record of how to invoke the skill stored in their SMS history. As developers, we also have a channel to reach people for feedback and to let them know about new features. We obviously need to be extremely respectful about texting a user unprompted, but having some channel to update them is particularly important when there’s no other way to notify a user of a change or problem, or to reach them for customer service.
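The text-message side of the flow is really just a tiny two-step conversation. Here’s a sketch of it as a state machine; the step names, record shape, and reply wording are illustrative (an SMS provider such as Twilio would actually deliver the messages), not Wiffy’s actual code.

```python
def next_step(record, incoming_text):
    """Advance one user's onboarding given their latest SMS.

    `record` is a per-user dict like {"state": "awaiting_network_name"}.
    Mutates the record and returns the reply to text back (or None if
    onboarding is already finished).
    """
    if record["state"] == "awaiting_network_name":
        record["network"] = incoming_text.strip()
        record["state"] = "awaiting_password"
        return "Got it! Now text me the wifi password."
    if record["state"] == "awaiting_password":
        record["password"] = incoming_text.strip()
        record["state"] = "done"
        # Final message doubles as a reminder of how to invoke the skill.
        return ("Awesome! Let's try it. Just say "
                "'Alexa, ask Wiffy what is the wifi password?'")
    return None  # onboarding already complete
```

The nice property of keeping the whole exchange in one function like this is that each inbound SMS webhook just loads the user’s record, calls it, and sends whatever string comes back.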
I’d love to get feedback on the skill. You can try it out here: https://www.wiffy.co
Interested in verbal computing? Sign up for my newsletter at hearingvoices.xyz