Persuading people to commute by bike using automatic persuasion and AI

Emmanuel Hadoux
Persuadr.ai
Nov 1, 2019 · 5 min read

How taking into account people’s concerns improves the persuasion power of your arguments (and your chatbots).

Santander bikes in London
Photo by Yomex Owo on Unsplash

This article is based on our research: Hadoux, E., & Hunter, A. (2019). Comfort or safety? Gathering and using the concerns of a participant for better persuasion. Argument & Computation, (Preprint), 1–35.

I live in London. It’s a big European (for now) city, so it is fairly easy (albeit not always cheap) to get wherever you want to go without a car. And yet, rush hours are still a thing, so you are still stuck in traffic, hopefully on the upper deck of the bus this time.

So, basically, we had this idea: can we persuade people (those who are able to, obviously) to commute to work by bike instead of by car?

Before answering this question, we had to go through several enlightening intermediate steps to validate the whole approach. The results are in the section titles, but each step builds on the previous ones, so don’t scroll too fast!

1. People agree on the concerns of arguments

Any argument (or just any short piece of text, really) contains one or more high-level ideas that we call concerns. It’s basically what the argument promotes or relates to. For instance, the argument “Buying a more expensive helmet will protect you more in case of an accident” relates to both “Personal Economy” and “Safety”.

We have crowdsourced the creation of 8 concerns for this biking domain: “Time”, “Fitness”, “Health”, “Environment”, “Personal Economy”, “City Economy”, “Safety” and “Comfort”. We have then asked participants to associate 51 hand-crafted arguments with one or more of these concerns. Interestingly, most people agreed on the concerns to associate with the arguments.

This means that, even if people have different stances and argue, they more or less agree on whether an argument is relevant to the conversation. It doesn’t prevent them from being purposely deceitful, though. But that’s something we’ll talk about in another article.
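To make the tagging step concrete, here is a minimal sketch of how crowdsourced labels can be aggregated into an argument’s concerns by majority vote. The annotations, the threshold, and the function name are all made up for illustration; the paper’s exact aggregation may differ.

```python
from collections import Counter

# Hypothetical crowdsourcing data: three annotators each tag one argument
# with one or more of the eight concerns.
labels = [
    ["Safety", "Personal Economy"],
    ["Safety"],
    ["Safety", "Personal Economy"],
]

def majority_concerns(labels, threshold=0.5):
    """Keep every concern chosen by more than `threshold` of the annotators."""
    counts = Counter(c for annotator in labels for c in annotator)
    n = len(labels)
    return sorted(c for c, k in counts.items() if k / n > threshold)

print(majority_concerns(labels))  # ['Personal Economy', 'Safety']
```

The high agreement we observed means this majority vote is usually decisive rather than a coin flip.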

2. People have non-linear preferences

Behind this barbaric-sounding term, the idea is quite simple: even if someone prefers “Time” over “Safety” and “Safety” over “Comfort”, they might not prefer “Time” over “Comfort”. Transitivity would be logical (and mathematically easier as well), but humans aren’t always logical, are they?

To reach this conclusion (which, to be honest, we kind of knew already), we asked participants to tell us which concern they preferred for each possible pair of concerns. Only roughly 20% were “logical”. We could figure out a clear winning (preferred overall) concern for an additional 20%, but that still means 60% of them were basically being irrational. That’s how it is.
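Checking whether one participant’s pairwise answers are transitive is a small exercise. The answers below are invented to show a cycle; the function simply looks for a triple that breaks transitivity.

```python
from itertools import permutations

concerns = ["Time", "Safety", "Comfort"]

# Hypothetical answers from one participant: prefs[(a, b)] is the concern
# they picked when shown the pair (a, b).
prefs = {
    ("Time", "Safety"): "Time",
    ("Safety", "Comfort"): "Safety",
    ("Time", "Comfort"): "Comfort",  # the cycle that makes them "illogical"
}

def preferred(a, b):
    """Winner of the pair (a, b), whichever order it was asked in."""
    return prefs[(a, b)] if (a, b) in prefs else prefs[(b, a)]

def is_transitive(items):
    """True if no triple a > b, b > c exists where a is NOT preferred to c."""
    for a, b, c in permutations(items, 3):
        if preferred(a, b) == a and preferred(b, c) == b and preferred(a, c) != a:
            return False
    return True

print(is_transitive(concerns))  # False: this participant has a preference cycle
```

Roughly 20% of our participants would pass this check outright.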

3. Even if the preferences are irrational, people are coherent with them

That conclusion is actually very important: even if people’s preferences over concerns can be a bit all over the place, when they choose a counterargument to your argument, it matches those preferences. This means that if you know their preferences, you know what they will say next (provided you know the set of counterarguments they can choose from, of course).

The experiment was quite simple: we asked participants for their pairwise preferences, then gave them several arguments and possible counterarguments to choose from. We observed that up to roughly 80% of the choices were made in accordance with the reported preferences. So people are being logical at the end of the day, just in their own way.

4. We can learn to predict their preferences (to in turn predict their choices)

A classification tree for the pair of concerns Comfort/Time

The figure above is a classification tree. We collected demographic and personality data alongside the preferences over concerns. With that, we were able to learn a tree for each pair of concerns, predicting the preferences of a person we had never seen before.

The demographic data were basic stuff, while we used the OCEAN model (also known as the Big Five) for the personality part. The example tree reads as follows: if someone’s occupation was labelled below 4 (we simply assigned numerical values to groups of occupations) and their extraversion score was below 5, they would most probably prefer “Time” over “Comfort”. And it was actually rather accurate.
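The path just described can be written out as plain nested conditions. Only the occupation-and-extraversion branch comes from the tree above; the other leaves are placeholders we invented so the sketch runs.

```python
def prefers_time_over_comfort(occupation_code, extraversion):
    """Hand-coded sketch of the learned Comfort/Time tree described above.

    Only the path (occupation < 4, extraversion < 5) -> "Time" comes from
    the article; the other two leaves are illustrative placeholders.
    """
    if occupation_code < 4:
        if extraversion < 5:
            return True   # most probably prefers "Time" over "Comfort"
        return False      # placeholder leaf
    return False          # placeholder leaf

print(prefers_time_over_comfort(occupation_code=2, extraversion=3))  # True
```

In practice we learned one such tree per pair of concerns, which together give the full set of pairwise preferences for a new person.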

5. Last but not least, all of that combined made our chatbot better

So we built a chatbot with all of that (P) and compared it to a baseline chatbot (B) in actual automatic discussions with participants. The purpose was to persuade them to commute by bike. The baseline bot (B) gave counterarguments related to the participant’s arguments, without any strategy. Our bot (P), on the other hand, could predict the participant’s preferences, hence their next arguments, and thus steer towards the most favourable line of discussion, avoiding branches where it knew it couldn’t win the overall debate.
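Picking a favourable line of discussion can be sketched as a minimax search over a small argument graph: the bot assumes the participant will reply according to their predicted preferences and chooses the opening whose whole line it can win. The graph, outcomes, and function names below are all toy assumptions, not the paper’s actual system.

```python
# Toy argument graph: the bot opens, the participant counters, and so on.
# `replies[a]` lists the counterarguments available after argument `a`.
replies = {
    "cycling saves you money": ["good bikes are expensive"],
    "good bikes are expensive": ["a second-hand bike costs very little"],
    "a second-hand bike costs very little": [],
    "cycling is faster than driving": ["cycling in traffic is dangerous"],
    "cycling in traffic is dangerous": [],  # no strong reply: a losing line
}
# Outcome of each leaf, from the bot's point of view.
outcome = {"a second-hand bike costs very little": 1,
           "cycling in traffic is dangerous": -1}

def value(arg, participant_to_move):
    """Minimax value of the discussion once `arg` has been stated."""
    opts = replies[arg]
    if not opts:
        return outcome.get(arg, 0)
    vals = [value(o, not participant_to_move) for o in opts]
    # the predicted participant picks the worst line for the bot
    return min(vals) if participant_to_move else max(vals)

def best_opening(openings):
    """Pick the opening argument whose whole line the bot can win."""
    return max(openings, key=lambda a: value(a, participant_to_move=True))

print(best_opening(["cycling saves you money",
                    "cycling is faster than driving"]))
```

Here the bot avoids the “faster than driving” opening entirely, because it can predict the dangerous-traffic counter it has no answer to.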

The results are quite clear:

  • P had an additional 26 percentage points in engagement,
  • B could give weapons to the participants to attack us, i.e., arguments they hadn’t thought of, resulting in some negative changes,
  • P had more people agreeing with our goal, and agreeing with it more strongly.

This was one of our first steps towards the full process of persuasion, from gathering data to actually automatically talking to people and we might say it was rather convincing. We have since improved this overall process and with it the results and have turned it into Pedro, our persuasion API for Persuadr.ai.

Pedro super-charges your chatbots with persuasion or can turn into a chatbot himself when needed. Combined with psychological studies (like this, or future articles there), Pedro increases your user engagement and conversion rates. Follow us on Medium, Twitter and LinkedIn to know more about automatic persuasion and don’t forget to visit our website if you want to play with Pedro, a free beta plan is coming up soon!


A.I. PhD formerly @UCL, working on making finance move faster, co-founder of Scribe.