Designing chatbots to fight information overload
A Google News Lab fellow's investigation into categorised news for Facebook Messenger.
Varun B. Krishnan worked with News Labs for eight weeks in the summer of 2017 as a Google News Labs fellow. This post is part of a series written by interns, fellows and apprentices reflecting on work experience that they completed with our team. To find out about ways to get involved, visit the BBC Careers website or follow BBC Get In on Twitter.
We live in a world of transactions. Money, goods, and importantly, information. Our social media feeds, news websites and internet-driven messaging apps churn out so much content that finding exactly what you want is now akin to the proverbial needle in the information haystack.
Karen: Good morning, what would you like to know about?
You: Hey Karen. What are the Paradise Papers?
Karen: Hold on, let me check… Here you go, a few links to help you get started.
Karen: If you want to stay updated on this topic, just type ‘Follow Paradise Papers’ and I’ll send you updates as and when they come in.
You: Great, thanks!
This is where a transaction like the above would make sense. Karen, I’m sure you were intelligent enough to figure out, is a chatbot — an artificially programmed entity that (in most cases) trawls the web or a repository and fishes out exactly the information you need. While a Google search is programmed to give you thousands of results, bots are programmed to give you a handful.
This information transaction, from hello to goodbye, is something that bots are increasingly capable of handling. According to a survey by LivePerson, “38% of consumers globally rated their overall perception of chatbots as positive.” Only 11% reported a negative perception of chatbots, while 51% were neutral and just wanted their problem resolved.
I was always fascinated by the very idea that behind a name like ‘Karen’ resides not a human being, but lines of code. This is what got me excited when I was offered the opportunity to work on a Facebook Messenger chatbot at BBC News Labs for eight weeks as part of a Google News Labs Fellowship.
The BBC Mundo Messenger bot, which was built by News Labs in 2016, is programmed to send headlines and links to the top stories of the day to tens of thousands of subscribers. My task was to build on this and add a feature where users could search for stories by category (for instance, ‘sports’) and get links to the day’s top stories in that category.
The project came with a few challenges — not all of them technical. The Mundo bot is programmed to speak Spanish, and the closest I got to Spanish was ‘Hasta la vista, baby’ from Terminator 2 (a bot reference for another time), one of my favourite movies growing up.
Because the feature would take free text input from users, the bot also needed to intuitively understand and account for typos and misspelled or missing words. Most of us have at some point typed something like ‘cst videos’ into a search bar, only for the search engine to have its gotcha moment: ‘Did you mean: cat videos?’
Behind the botwork
So what would happen behind the scenes if a user typed ‘categories’ (or ‘cstogaried’) into a Facebook Messenger conversation with our Mundo bot? Ideally, the bot would give you a list of categories to select from, corresponding to the nine major topic tabs on the BBC Mundo website: News, Latin America, International, Business, Technology, Science, Health, Culture and Sports.
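As a sketch of how that typo tolerance could work, here is a simple keyword matcher based on Levenshtein edit distance. The category names (taken from the English list above — the live bot would use their Spanish equivalents), the helper names and the distance threshold are all illustrative assumptions, not the bot's actual code:

```javascript
// Illustrative sketch: map a possibly misspelled message to the closest
// known keyword using Levenshtein edit distance. Names and the distance
// threshold are assumptions, not the Mundo bot's real implementation.
const CATEGORIES = [
  'News', 'Latin America', 'International', 'Business', 'Technology',
  'Science', 'Health', 'Culture', 'Sports',
];

// Classic dynamic-programming edit distance between two strings.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Return the closest keyword if it is within the tolerance, else null.
function matchKeyword(input, keywords, maxDistance = 3) {
  const cleaned = input.trim().toLowerCase();
  let best = null;
  let bestDist = Infinity;
  for (const kw of keywords) {
    const d = levenshtein(cleaned, kw.toLowerCase());
    if (d < bestDist) {
      best = kw;
      bestDist = d;
    }
  }
  return bestDist <= maxDistance ? best : null;
}
```

With something like this in place, ‘catagories’ still resolves to the categories command and ‘sporst’ still finds Sports, while input that is nowhere near a known keyword falls through so the bot can offer help instead.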
To get you your stories, the bot needs to talk to the BBC News feed through an Application Programming Interface, or API. The API is like a waiter who fetches your order.
In this case, the bot fetches all the stories in a particular category from the API and does two things:
- Give you the headline, a thumbnail and a web link for each of the four latest stories in that category;
- Store all of the stories — not just the top four — into a database for future use.
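The two steps above can be sketched as follows. This is a minimal illustration rather than the bot's actual code: an in-memory Map stands in for the MongoDB collection, and the story fields and card shape are assumptions loosely modelled on Messenger-style link cards:

```javascript
// Sketch of the two-step flow: reply with the four newest stories and cache
// the full list for follow-up requests. The Map stands in for the MongoDB
// collection the real bot would use; field names are illustrative.
const cache = new Map(); // userId -> { category, stories, cursor }

function handleCategoryRequest(userId, category, storiesFromApi) {
  // Step 2: store every story the API returned, plus a cursor for 'more'.
  cache.set(userId, { category, stories: storiesFromApi, cursor: 4 });

  // Step 1: format the four latest stories as Messenger-style cards.
  return storiesFromApi.slice(0, 4).map((s) => ({
    title: s.headline,
    image_url: s.thumbnail,
    default_action: { type: 'web_url', url: s.link },
  }));
}
```

Doing both steps in one pass means a single API call per category per user can serve several rounds of conversation.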
The immediate utility of the first step is clear — the bot just gives you the stories that you asked for. But what does the second step do? Why do we even need a database?
Well, the API is the busy waiter at a popular restaurant during the holiday season. They have orders coming in from a lot of other tables, so we don’t want to bother them every time we want another helping — once the food is on our table (the database), we can serve ourselves with our nifty bot.
The first time a user asks for stories, we would want to retrieve a set of links from the API and store them in the database for future use. This means we wouldn’t need to bother the API for the rest of the day.
If a user wants to read more stories, they can just type ‘more’ and the bot goes back to the database and fetches four more stories. This continues until all the links provided by the API are delivered to the user. The number of stories the API serves to the bot depends on how many stories that category has in the news feed at the time the request is made.
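A minimal sketch of that ‘more’ flow, assuming the full story list has already been cached. Here a plain object with a cursor stands in for the per-user MongoDB document, and all names are illustrative:

```javascript
// Sketch of the 'more' command: page through the cached stories four at a
// time until the list delivered by the API is exhausted. 'entry' stands in
// for a per-user record in the database: { stories: [...], cursor: n }.
function nextPage(entry, pageSize = 4) {
  if (!entry || entry.cursor >= entry.stories.length) {
    return null; // nothing cached, or every story already delivered
  }
  const page = entry.stories.slice(entry.cursor, entry.cursor + pageSize);
  entry.cursor += pageSize;
  return page;
}
```

Keeping the cursor alongside the cached stories means the bot never has to ask the API twice for the same category; when `nextPage` returns null, the bot can tell the user it has run out of fresh stories.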
Since it’s possible to deliver categorised news to users, it should also be possible to implement a dashboard for administrators to track which categories users requested the most. Eight weeks being eight weeks, I didn’t have enough time to get started on this feature of the bot publishing system.
What I learned
One of the challenges of working on an innovation team is that sometimes your ideas don’t make it in front of audiences, for reasons outside your control. I produced a working prototype while completing my fellowship, but the systems underlying the BBC’s chatbots changed before my code was able to be implemented. But like all good experiments, I still learned a lot from it.
The project gave me the opportunity to interact with three different types of people: software developers, who helped me develop and optimise code; user interface specialists, who discussed what the interface should look and feel like; and social media managers from BBC Mundo, who helped me understand how their audience perceived and interacted with chatbots.
The project also helped me add some technical skills to my arsenal: the Node.js framework, the Facebook Messenger Platform and MongoDB.
The bigger picture
News organisations now have very active Twitter accounts: according to a (slightly dated) Pew study, the average media outlet maintains about 41 different Twitter feeds and tweets 33 times a day from its primary handle.
News organisations use Facebook and Instagram in much the same way, so the flood of information our audiences experience is continuous, relentless even.
In contrast, chatbots help channel this information and give you only what you ask for. If companies selling products to consumers can leverage chatbots to answer queries, why can’t news organisations provide news to their consumers the same way?
Let’s face it and embrace it — there’s a botpocalypse coming. From helping students choose a course at university to bordering-on-morbid ‘grief bots’ that let you talk to your dead loved ones, there’s now a chatbot for nearly everything, and media outlets have just started taking baby steps in this direction. The BBC has started taking strides with their mini chatbot army, and I for one wrapped up the fellowship with a general feeling of technological competence and contentment.