How we built an interactive chatbot in a week
Last month we put the finishing touches on our Triggers API. We built the API to identify relevant information (triggers) within any unstructured data set, so that bot developers could build more sophisticated chatbots that proactively alert a user to a piece of contextual information. To test the API out, we employed the expertise of Microsoft for a week to help us hack a bot together using their Bot Framework.
Why is interactivity so important?
Chatbots have historically used keyword matching to map a question to a predefined response. This has limitations for both the developer and the user. Developers have to craft extensive word lists to capture all possible ways in which a query can be expressed, which inevitably break down as the bot is extended to handle more than a handful of intents.
Users end up frustrated with the dumb prepared responses, and can sense the limitations of the bot they’re chatting to.
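To illustrate the brittleness: here is a minimal keyword matcher of the kind described above (purely illustrative; the intent names and word lists are our own invention). It works for queries that happen to contain a listed word, and fails the moment the same question is phrased differently:

```python
# Naive keyword matching: each intent is a hand-crafted word list.
KEYWORDS = {
    "company_news": ["news", "headline", "headlines"],
    "funding": ["funding", "investment", "raised"],
}

def match_intent(query):
    """Return the first intent whose keyword appears in the query."""
    words = query.lower().split()
    for intent, keywords in KEYWORDS.items():
        if any(k in words for k in keywords):
            return intent
    return None  # unrecognised -> the bot falls back to a canned response

print(match_intent("any news on IBM?"))        # company_news
print(match_intent("did IBM raise a round?"))  # None: "raise" isn't in the list
```

Every new phrasing means another entry in the word list, which is exactly why this approach stops scaling past a handful of intents.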
If the future of chatbots and all conversational interfaces is going to be as bright as some people think, then we need to move towards more intelligent interactive bots that demonstrate intuition and contextual awareness. To do that a bot needs to 1) understand users’ questions regardless of how they are worded, and 2) generate answers to less common questions on the fly, using data from more sources than a CMS or database. It needs to search everywhere, just as a human can.
Triggers API is a step in that direction: we use Machine Learning (ML) and Natural Language Processing (NLP) to identify relevant content, tied to both an entity and a topic, within any unstructured or structured data set, so we can provide context and further information in a response. This is only part of the way there, as the data sources we use are still limited. We are working to expand them, but we also need more data and more feedback to improve the contextual awareness: if a bot is going to interrupt a user with a notification, it needs to be 100% sure it’s accurate and relevant.
The hackathon challenge
Before releasing our API into the wild, we thought it best to road test the features and usability. We decided to run a five-day hackathon with two of our engineers. The objective was clear:
Build a chatbot within five days, using our Triggers API, that interactively delivers news on any chat platform.
No mean feat!
Choosing a Use Case
Most of the use cases we came up with for the chatbot were around B2B scenarios. We imagined a financial trader receiving alerts about their portfolio stocks tanking, or a supply chain manager being alerted to a risk along a trade route.
The winner was Recruitment. Speaking to a number of recruiters (yes, they call us all the time… the shoe was on the other foot!), we quickly identified a need for a virtual assistant that would alert a recruiter to worthwhile trigger events, such as:
- Personnel Changes: someone leaving a company, being fired, or changing roles
- Investments: a company raising fresh capital to spend on new recruits
- Acquisitions: if a company acquires or merges with another there are normally personnel movements, or potential for new hires
- Expansions: a company expanding into a new region usually means new hires
We trained our ML models for these trigger events, and plumbed them into the API.
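A sketch of how those four trigger categories might be represented downstream (the type names and fields here are our own illustration, not a published Triggers API schema):

```python
from dataclasses import dataclass
from enum import Enum

class TriggerType(Enum):
    PERSONNEL_CHANGE = "personnel_change"
    INVESTMENT = "investment"
    ACQUISITION = "acquisition"
    EXPANSION = "expansion"

@dataclass
class TriggerEvent:
    entity: str           # the company the event is about
    trigger: TriggerType  # one of the four recruitment categories
    summary: str          # short description extracted from the source text

event = TriggerEvent("IBM", TriggerType.INVESTMENT,
                     "IBM raises fresh capital for its cloud division.")
print(event.trigger.value)  # investment
```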
Choosing our bot framework
Given the time constraints, we wanted the easiest framework to set up and get running within a few hours. Without much deliberation, we landed on Microsoft’s Bot Framework, half persuaded by their generous offer of support and half by its flexibility. It also came with the added benefit of their NLP toolkit, LUIS, used for creating natural language intents. Its cross-platform compatibility meant we could test the bot on a range of platforms; we settled on Slack and Skype, with a stretch goal of a Cortana Skill if time permitted… yeah right!
Reaching out to Microsoft with our plans, we were able to convince a couple of their engineers to join us for the whole hack week! Luckily both Toby Bradshaw and Petro Soininen of Microsoft’s Commercial Software Engineering team had a lot of experience with both the Bot Framework and LUIS, and they even offered to host us in their Paddington offices.
Conversational Scenario Mapping
The stage is set: we’ve got the engineers, we’ve got the experts, we’ve got our shiny new API, and we’ve chosen a framework. We’ve also nailed the use case, and have enough coffee, Pepsi and popcorn to fuel a Mongol army.
What we don’t have is a low-fi prototype to base our development around.
Out come the whiteboards and markers! After half a day of prototyping we sketched up a user conversation flow that looked like this:
The prototype: through a chat dialogue, the user creates an alert for a particular entity, say ‘IBM’, and a particular recruitment topic, say ‘Personnel Change’, and sets the frequency of alerts: as it happens, daily or weekly. Once the alert is set up, the user waits for a trigger event to occur, then receives a proactive alert within Slack with a summary of the information around the event.
As you can see from the diagram, this is quite a simple structure: ‘Setup Alert’, ‘Delete Alert’, ‘List Alert’, and some other administrative controls. The real magic is in the Triggers API, which fetches the relevant content in the background.
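The alert commands from the prototype can be sketched as a small data model (a hypothetical stand-in; our real bot stored alerts server-side, and these names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    entity: str      # e.g. "IBM"
    topic: str       # e.g. "Personnel Change"
    frequency: str   # "as it happens", "daily" or "weekly"

@dataclass
class AlertStore:
    alerts: list = field(default_factory=list)

    def setup_alert(self, entity, topic, frequency="daily"):
        self.alerts.append(Alert(entity, topic, frequency))

    def delete_alert(self, entity, topic):
        self.alerts = [a for a in self.alerts
                       if not (a.entity == entity and a.topic == topic)]

    def list_alerts(self):
        return [f"{a.entity} / {a.topic} ({a.frequency})" for a in self.alerts]

store = AlertStore()
store.setup_alert("IBM", "Personnel Change")
print(store.list_alerts())  # ['IBM / Personnel Change (daily)']
```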
Giving the bot a voice — LUIS
When building a bot, the most important thing is that the interaction feels natural to the user. Working with Microsoft for the week, we had access to the LUIS toolkit, and were quite quickly able to build natural language handling for the main commands we’d mapped out in the scenario mapping.
LUIS was very easy to work with. The UI is clean and simple, with just one click to create an intent, add utterances or select entities.
To our surprise, it reached high performance quickly, with just a few tens of examples per intent.
We were able to train all the intents we needed for a bot in an afternoon. This is probably partly due to the simplicity of our bot (<10 intents, <5 entity types), but also partly due to the power of the underlying machine learning in LUIS.
We also found the “Suggested Utterances” tab very helpful for improving the models with a bit more training; this was something we would come back to, as we only had a couple of days left in the build!
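For each utterance, LUIS returns JSON containing a top-scoring intent and any recognised entities. A sketch of parsing a v2-style response (the sample below is simplified and the intent/entity names are our own; real responses carry more fields and vary by API version):

```python
# A simplified LUIS v2-style response for one of our bot's commands.
luis_response = {
    "query": "alert me when someone leaves IBM",
    "topScoringIntent": {"intent": "SetupAlert", "score": 0.97},
    "entities": [
        {"entity": "ibm", "type": "Company"},
        {"entity": "someone leaves", "type": "Topic"},
    ],
}

def parse_luis(response, threshold=0.5):
    """Pull out the top intent (if confident enough) and its entities."""
    top = response["topScoringIntent"]
    if top["score"] < threshold:
        return None, {}  # low confidence -> let the bot ask for clarification
    entities = {e["type"]: e["entity"] for e in response["entities"]}
    return top["intent"], entities

intent, entities = parse_luis(luis_response)
print(intent, entities["Company"])  # SetupAlert ibm
```

The confidence threshold is the kind of knob the “Suggested Utterances” retraining loop helps tune: more labelled examples push scores for correct intents higher.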
Putting it all together
The rest of the build was very straightforward: we got an end-to-end response within a day and a half of hacking. Most of the endpoints we already knew, as we’d built them! We chose to host the bot on Azure, since Microsoft offered us some freebies, and it was a chance for us to play around with something new.
Once our bot application was deployed as an Azure web service, we registered two channel-specific handlers, one for Slack and one for Skype, and authenticated them with the Bot Framework. The advantage of Microsoft’s offering is that it takes care of sending and receiving messages on each platform for us, leaving us with more time to focus on our own code. An end-to-end diagram of a user interaction looks like this:
Slack —— message ——→ Bot Framework
Bot Framework —— standard message ——→ LUIS
LUIS —— intent + entities ——→ Bot Framework
Bot Framework —— response ——→ Slack
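The hops above can be sketched as a simple function pipeline (everything here is stubbed for illustration: the real Bot Framework handles the transport and LUIS does the actual understanding):

```python
def slack_to_framework(raw):
    # The Bot Framework normalises each platform's messages
    # into a standard activity shape.
    return {"channel": "slack", "text": raw}

def luis_understand(activity):
    # Stub for the LUIS call: returns an intent plus entities for the text.
    text = activity["text"].lower()
    if "alert" in text:
        return {"intent": "SetupAlert", "entities": {"Company": "IBM"}}
    return {"intent": "None", "entities": {}}

def bot_logic(understanding):
    # Our own code: map the intent to a reply.
    if understanding["intent"] == "SetupAlert":
        return f"Alert created for {understanding['entities']['Company']}."
    return "Sorry, I didn't get that."

def handle(raw):
    # Slack -> Bot Framework -> LUIS -> our logic -> back to Slack.
    return bot_logic(luis_understand(slack_to_framework(raw)))

print(handle("Set up an alert for IBM"))  # Alert created for IBM.
```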
Stretch Goal — Cortana Skill
With a full day left and a fully working prototype that was proactively responding with relevant(ish!) triggers, and with one more day of Microsoft’s expertise at our disposal, we decided to push ourselves for the stretch goal — building a Cortana skill.
The flexibility of the Bot Framework meant that we were able to reuse a lot of the work we’d already done, save for a bit of Cortana-specific configuration. The biggest issue, however, was a design one: we had to consider scenarios for a “headless” experience, where all commands and responses would be audio only.
We re-designed the scenarios, as you can see here, to ensure a user wouldn’t be left waiting while a long textual list was read out. We then incorporated our Skim API to summarise the relevant triggers down to just a few lines of text, which made the Cortana Skill a lot snappier. Another great use case for another great API (shameless plug!).
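The voice constraint boils down to capping how much gets read aloud. A minimal stand-in for that step (the real work is done by the Skim API; this helper and its wording are purely our own sketch):

```python
def voice_summary(triggers, max_items=3):
    """Condense a list of trigger summaries into a short spoken response.

    Stand-in for the Skim API call: here we simply cap the list and
    mention how many items were left out.
    """
    spoken = triggers[:max_items]
    remaining = len(triggers) - len(spoken)
    text = " ".join(spoken)
    if remaining > 0:
        text += f" Plus {remaining} more on your dashboard."
    return text

triggers = [
    "IBM's CTO has stepped down.",
    "IBM raised new capital.",
    "IBM is expanding into Brazil.",
    "IBM acquired a startup.",
]
print(voice_summary(triggers))
```

Keeping the spoken response to a few lines is what made the difference between a snappy Cortana exchange and the user sitting through a read-out list.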
We set out to build a bot within five days that would proactively notify a user of an event using our Triggers API. In that regard we were successful. Beyond that, we were able to test out the Triggers API and improve our documentation (the real reason for doing this all along) as we went through the process of building with it, which in turn helped our engineering team identify areas to improve and start thinking more about the future. Triple success!
Petro from Microsoft who worked with us on the project said: “Skim’s technology and the Triggers API product are great examples of combining bespoke AI technology with various Microsoft products such as Cortana, Bing and Cognitive Services. Running a joint engineering project to integrate Microsoft Bot Framework to Skim’s stack was a productive way to add an intuitive multi-canvas interface to their product offering.”
The biggest surprise for us was Cortana. She/it was a real eye-opener into how valuable a virtual assistant (VA) could be, not only for consumer applications but also for business. Imagine a simple B2B scenario: a salesperson is driving to a meeting with a client when an alert comes over the car stereo, informing them of a deal their customer just signed with a competitor; information they can react to immediately at their meeting. All of that could happen in a world of connected services and data bringing contextual information to a user via voice, something Triggers API is built perfectly for.
We’ll be making the recruiter bot available via Slack (for free) soon. There’s a bit of polishing to do before that happens, so check back soon, or visit triggers.ai to get on the waiting list.