‘Alexa — what’s on at The Royal Opera House?’ : Lessons from our explorations of voice recognition tech

Royal Opera House Audience Labs
Jun 6, 2017

Bryan Hymel as Aeneas in Les Troyens, The Royal Opera © ROH/Bill Cooper, 2012

By Tom Nelson, Head of Audience Labs, Royal Opera House

Here at the Royal Opera House, we’re always looking for ways of giving our audiences the information they want, in the way they want it. We’ve been following the proliferation of Amazon Echo and Echo Dot hardware (over 8 million sold worldwide and counting) and the ease with which bespoke Alexa Skills can be created for the service.

We use the market-leading Tessitura software to handle all our ticketing and customer relationship management activities, and were keen to find a way of integrating Alexa with Tessitura that might benefit the wider cultural sector.

So… we’ve tried to create an Alexa Skill that pulls live data from our Tessitura API without affecting system performance or compromising security.

You can check out the skill yourself on the Alexa Skills store.

We think voice-based interfaces are going to become a lot more important (as Apple’s recent announcement of the HomePod smart speaker suggests), and developing our own Alexa skill is a great way to get practical experience in this emerging area of consumer tech.

The Royal Opera House Alexa Skill

The rest of this post goes into more technical detail on how we built and tested the skill, and briefly covers our plans for developing it further. It’s written by our lead developer, Nicola Pietroluongo, and as such it gets quite techie…

How did we build it and what did we learn?

The skill is built with the JavaScript runtime Node.js and several Amazon Web Services (AWS) products. We’re not going to give a step-by-step tutorial on building an Alexa skill here; there are plenty of good ones online, including the official guide from Amazon here. Instead, we’re going to focus on two specific challenges we had to tackle: parsing dates from natural language, and integrating with the Tessitura API.

Date interval

The first challenge was to convert a wide range of words that indicate dates (“tomorrow”, “next week”, or “July”) into a date format (such as “2017-06-01”) that could be used for search. Using the AMAZON.DATE slot type in the skill’s intent (more information here) helped to convert words like “next week” into notation like 2017-W49; this is the ISO 8601 date format and means ‘week 49 of 2017’.
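To illustrate, declaring that slot in the intent schema looks something like the sketch below. This is not our production interaction model; the intent and slot names are placeholders.

{
  "intents": [
    {
      "intent": "WhatsOnIntent",
      "slots": [
        { "name": "date", "type": "AMAZON.DATE" }
      ]
    }
  ]
}

Sample utterances then map phrases such as “what’s on {date}” to the intent, and Alexa fills the date slot with an ISO 8601 value.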

Alexa skill development. Not quite as complex as an opera production.

The AMAZON.DATE value cannot be used directly to create a valid JavaScript Date object, so we decided to create a JS module to handle the conversion. The module, amazon-date-parser, is available as an open-source project via npm (see: https://www.npmjs.com/package/amazon-date-parser).
The module converts AMAZON.DATE inputs into “date range” objects. For instance, the aforementioned 2017-W49 is parsed into something like:

{ startDate: 2017-12-04T00:00:00.000Z, endDate: 2017-12-10T23:59:59.999Z }

The date range object makes it easy to query an external API or a database for performances happening within a specific range of dates. In addition, the amazon-date-parser module includes an elegant way to handle invalid or unsupported dates.
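For reference, here is a minimal sketch of the module in use. The raw slot value is hard-coded for illustration, and the error handling assumes the module throws on input it can’t parse (check the npm README for the exact behaviour).

var AmazonDateParser = require('amazon-date-parser');

var dateRange;
try {
    // '2017-W49' is the raw AMAZON.DATE value Alexa supplies for "next week"
    dateRange = new AmazonDateParser('2017-W49');
} catch (err) {
    // an invalid or unsupported date ends up here, so the skill can apologise gracefully
}

// dateRange.startDate and dateRange.endDate are plain JavaScript Date objects
// bounding the week, ready to use in a performance search.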

Tessitura integration

The second challenge was to find a way to “link” the Alexa skill and the Royal Opera House Tessitura system without affecting performance or security as mentioned earlier.

Tessitura is a CRM platform developed specifically for the needs of arts, cultural and entertainment organizations. All our performance data is stored in Tessitura.

The Royal Opera House ‘Bridge of Aspiration’

To query the Tessitura APIs and retrieve information about performances, we decided to use the Amazon API Gateway service as a “bridge” between the Alexa skill and Tessitura. Amazon API Gateway manages and hosts a RESTful API to expose AWS Lambda functions (which are small pieces of code you can run ‘on demand’). Furthermore, HTTP endpoints can be easily configured, secured and cached.
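As a rough sketch of the pattern (not our production code: the host, path and query parameters below are placeholders, and authentication is left out), a Lambda function sitting behind API Gateway might look like this:

'use strict';
var https = require('https');

// Placeholder host - the real Tessitura REST endpoint and its credentials
// are configured outside the code.
var TESSITURA_HOST = 'tessitura-api.example.org';

exports.handler = function (event, context, callback) {
    var path = '/performances?start=' + encodeURIComponent(event.startDate) +
               '&end=' + encodeURIComponent(event.endDate);

    https.get({ host: TESSITURA_HOST, path: path }, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            // API Gateway can cache this response, so repeated questions
            // don't put extra load on Tessitura.
            callback(null, JSON.parse(body));
        });
    }).on('error', callback);
};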

We are using a variety of Amazon services already, but we particularly love AWS Lambda for its ability to scale automatically without requiring us to provision or manage servers, taking away most of the administrative issues around performance so we can focus on developing useful functionality.

What does our skill do?

The initial beta version of our Alexa skill allows you to ask the Royal Opera House for performances on a specific date.

Things you can ask (a sketch of the intent handler behind these follows the list):
“Alexa, ask the Royal Opera House what’s on today”
“Alexa, ask the Royal Opera House what’s on this weekend”
“Alexa, ask the Royal Opera House what’s on the 12th of July”
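Behind the scenes, each of these phrases resolves to the same intent with a date slot. Using the alexa-sdk Node.js module, the handler looks something like the sketch below; the intent name, slot name and performance lookup are placeholders, and the real skill does rather more.

var Alexa = require('alexa-sdk');
var AmazonDateParser = require('amazon-date-parser');

var handlers = {
    'WhatsOnIntent': function () {
        // Raw ISO 8601 value Alexa resolved from the spoken date
        var rawDate = this.event.request.intent.slots.date.value;
        var range = new AmazonDateParser(rawDate);

        // Placeholder: in reality this calls our API Gateway endpoint,
        // which in turn queries Tessitura for performances in the range.
        var speech = 'Here is what is on between ' +
            range.startDate.toDateString() + ' and ' + range.endDate.toDateString();

        this.emit(':tell', speech);
    }
};

exports.handler = function (event, context) {
    var alexa = Alexa.handler(event, context);
    alexa.registerHandlers(handlers);
    alexa.execute();
};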

We’re planning to add more features soon and have been carrying out some initial user testing to understand how our audiences could use our Alexa skill or other voice interfaces.

User Testing

Amazon have a comprehensive guide with suggestions on how to evaluate the skill.

We invited an enthusiastic group of about a dozen of our ROH Student Ambassadors to put the skill through its paces. We tested whether they could launch the skill and get a response, how the skill handled errors, and how it provided help. We also asked our student group what other features and information they’d like the skill to provide next.

What’s next?

Whilst the skill is fairly basic in this initial beta version, we’ve done much of the hard work of integrating with Tessitura. The next step will be to return live ticket availability and synopses. And if you’ve got ideas or comments on our Alexa skill, we’d love to hear from you: write a comment below.

Nicola Pietroluongo, Lead Developer

Tom Nelson, Head of Audience Labs
