Designing ReminderBot (Part I)
A mini-adventure in conversational UX.
I have been intrigued by the potential of designing in a conversational context for a while. My curiosity was piqued over the summer by Adrian Zumbrunnen’s excellent Medium article and impressive portfolio website. Over the past six months, momentum behind conversational UI and messaging apps has only grown. Apple’s September iOS 10 update brought app-like functionality to iMessage, and Facebook announced just last Monday that there are now 33,000 bots on its Messenger platform.
Initially I was most excited by the inherently social nature of messaging platforms — the social context suggested lightweight software tools that wouldn’t make sense anywhere else.
Along these lines, my first idea was PromiseBot. This tool would be a light-hearted way for friends to keep tabs on the promises they made to each other.
The screens above show the gist of how this would work. The bot detects when the word ‘promise’ is used during a conversation between two friends. It then prompts the user to invite it into the conversation thread to set up the promise details. Once set up, the software ‘promise’ would exist there for as long as required, visible to both parties and able to send out reminders as necessary. Fun animations could communicate a promise being kept or broken. ‘Empty’ promises could be quite literally empty.
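As a sketch of how that trigger might work under the hood (hypothetical code, since PromiseBot was never built; `handle_message` is an invented hook, not any real platform API):

```python
import re
from typing import Optional

# Match 'promise' as a whole word, in any case.
TRIGGER = re.compile(r"\bpromise\b", re.IGNORECASE)

def detect_promise(message: str) -> bool:
    """Return True if a message appears to contain a promise."""
    return bool(TRIGGER.search(message))

def handle_message(message: str, sender: str) -> Optional[str]:
    """Prompt the sender to invite the bot when a promise is detected."""
    if detect_promise(message):
        return (f"Hi {sender}! Sounds like a promise was just made. "
                "Invite me into the thread and I'll keep track of it for you.")
    return None
```

The whole-word match matters: you don’t want the bot butting in every time someone says ‘promising’.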
Focus on the conversation
PromiseBot was a good start. But I quickly came up against limitations in my ability to prototype and build such a piece of software. I was also conscious that Apple specifically forbids iMessage apps from ‘listening’ to conversation content in the way that PromiseBot would need to do in order to function as I intended. It was also clear that Facebook currently conceives of bots as standalone entities that a user interacts with one-on-one in a separate thread, rather than as occasional interlopers.
So I stored that idea away for the future and pivoted towards a simpler piece of software. Enter ReminderBot.
If PromiseBot was an investigation of what the social context of messaging threads affords design, then ReminderBot investigates what the conversational character of the interaction makes possible.
Specifically, the idea was for a tool that would nag you to do the one task that you had really been putting off. Interaction in a message conversation feels (in theory) much more like interacting with an actual human being, so the concept was to leverage this to really motivate a user to complete a task. Could you design a piece of software as effective as a nagging friend or parent?
There are of course hundreds of reminder or to-do apps out there, but they tend to be quite transactional in nature. By giving the ‘reminder’ some personality, by enabling it to talk back, I thought I could bring some fresh thinking to this space, all while learning a bit about what does and doesn’t work in a conversational UI.
Everyone occasionally needs a bit of a nag to get stuff done.
ReminderBot will learn the task that you are putting off and, by pestering you in an uncannily human way, help you get it done quicker than you would otherwise.
After checking out the various options, I decided to design and build the bot using Chatfuel. There are a number of tools requiring different levels of technical ability (check out api.ai and Botsify, for example), but I went with Chatfuel as it seemed to have a manageable learning curve for someone totally new to bots like me.
An experiment within an experiment
It felt only right that ReminderBot itself should be involved in the user research from the very beginning. I had been reading interesting things elsewhere on Medium about the potential of bots to participate in the discovery phase of the project and wanted to try this out too.
I therefore designed a basic form in chat format to find out what kinds of tasks people tended to put off most and what their feelings were about being pestered by other people to do stuff. I wanted to see what effect talking to a non-human interlocutor had on the quality of the information that could be gathered.
I released this out into the world with a carefully worded introduction to explain that ReminderBot was helping out with user research for its own design, an admittedly unusual concept.
To (hopefully) drive engagement I wanted the bot to come across as fun and friendly, so I made use of emojis and informal language. I also made heavy use of the ‘typing’ animation to give the impression that the bot was listening and responding; I quickly realised that having bot messages arrive instantly was very jarring.
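Chatfuel handles the typing indicator itself, but the underlying heuristic is easy to sketch: delay each message roughly in proportion to its length, capped so long messages don’t stall the conversation. The constants below are my own guesses, not Chatfuel’s:

```python
import time

WORDS_PER_SECOND = 3.5  # rough 'typing speed' for the bot (a guess)
MAX_DELAY = 4.0         # cap so long messages don't stall the chat

def typing_delay(message: str) -> float:
    """Seconds to show the typing indicator before a message is sent."""
    return min(len(message.split()) / WORDS_PER_SECOND, MAX_DELAY)

def send(message: str) -> None:
    # A real bot would trigger the platform's typing indicator here,
    # wait, and then deliver the message.
    time.sleep(typing_delay(message))
    print(message)
```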
In the Wild!
It was very interesting to see what came back when I put ReminderBot out into the world. The novelty of it certainly led to a high number of participants: after releasing the link on my social networks I had more than thirty responses within 24 hours.
Respondents complained about parents and spouses, reflected on their lack of self-discipline … and made jokes.
There was perhaps predictably quite a broad range in response quality. Some people just wrote single word responses to every question, or exclamations like ‘lol’ or ‘what???’. About 10% of participants lost interest and left before the form was complete.
Other people wrote extremely full answers, really opening up to ReminderBot; several responses exceeded 50 words. People spoke in the way they would to an actual person, with one respondent even apologising when two of his answers overlapped slightly in their content. Overall, the detail and intimate nature of much that was disclosed was quite striking.
The question script was designed to gradually increase in length and intimacy, from just asking the respondent about a task they didn’t like doing, to asking about times when they had nagged people and it had (or hadn’t) worked. This gradual progression seemed to be important in building trust and a sense in the respondent that they were actually being listened to by the bot.
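The script itself can be thought of as an ordered list of questions, each tagged with an ‘intimacy’ level so the escalation is explicit. The questions below are paraphrased illustrations, not the bot’s exact wording:

```python
# (level, question) pairs; sorting by level makes the escalation explicit.
QUESTIONS = [
    (1, "What's one task you always put off?"),
    (2, "How do you currently remind yourself to do things?"),
    (3, "How do you feel when someone nags you about a task?"),
    (4, "Tell me about a time you nagged someone and it worked (or didn't)."),
]

def script():
    """Yield the questions in order of increasing intimacy."""
    for _, question in sorted(QUESTIONS):
        yield question
```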
My hunch that the typing animation was important in building this trust was confirmed by some of the answers which, rather than answering the question at hand, instead commented on how quickly the questions were coming in (see below). Annoyingly, the typing animation can’t be used during user-input forms in the current Chatfuel software, so there is no way of solving this particular trust issue at the moment.
Overall, though, I was really encouraged by how much interesting and useful information I was able to gather in this completely automated fashion. While the depth and detail of the responses were lower than an interview could achieve (you can’t follow up on interesting tangents, for example), the conversational format still drew out more intimate, open-ended feedback than a standard survey form would, in many cases more than I could have anticipated.
As a form of user research, I would suggest it sits somewhere between the two, combining the reach of a survey with some of the depth of an actual interview. In some contexts I can see it being a useful compromise, and I intend to keep experimenting with it in the future.
The Actual Research
With so many interesting insights about how people react to divulging personal information to a chatbot, it was easy to forget that the main point of ReminderBot was to gather user research about people’s current reminder behaviour and their attitudes towards being reminded and nagged about tasks. So what were these attitudes and behaviours?
The research suggested that there were two main types of activities that people needed reminding to do. The first, perhaps unsurprisingly, was activities that people find intrinsically boring but that need to happen fairly regularly. Activities like ironing, dish-washing, and putting the bin out came up a lot, even in my limited sample.
The second type of activity that people needed reminding to do was more aspirational — activities like making a portfolio, meditating and so on.
To do these activities my respondents were currently relying on a mixture of lists, deadlines, nagging from others, and good old-fashioned self-discipline. Some people, it turned out, hated nagging of any kind, whether they were doing it themselves or on the receiving end. For those who found it useful, it was clear that timing and tone were crucial. I summarised my findings as a set of design principles, listed below:
A Good Reminder…
… is friendly.
… comes at the right time (and at the right frequency).
… makes its value clear to the person being reminded.
… is a proposal rather than an instruction.
… is prepared to offer flexibility.
These statements immediately began to suggest ways that ReminderBot should work. Careful wording and timing, and the flexibility to renegotiate to some degree would all be important ‘features’ in order for it to feel useful to people.
Part II (coming soon)
With a clearer idea of the contexts in which ReminderBot might be useful, and of people’s preferences when it came to being nagged, I was ready to start prototyping and testing some specific ideas. You’ll be able to read all about this in my next Medium article — coming soon.