When AI Meets Behavioural Science

Behavioural science and AI: the current state of affairs

Since the publication of Nudge, the impact of behavioural science on organizations of all kinds has been growing rapidly, worldwide. Recent advances in technology are being integrated into behavioural science, compounding its impact in areas ranging from finance to policy. “Artificial intelligence” (or “AI”) is now part of our vernacular, though it is worth defining before we start. Briefly, AI refers to the theory and development of systems that can perform tasks which normally require human intelligence.

AI tools can be employed to improve behavioural science theories and applications, an increasingly strong trend in academia and consulting. The relationship between behavioural science and AI can, however, be far more interdependent: though this direction is much less developed, insights from behavioural science can be incorporated into machine learning algorithms, so that what the computer learns reflects human behaviour. In this article, we give an overview of the use of AI in behavioural science and the case for more interplay between the two disciplines, before briefly touching on related ethical issues in the last section.

What AI does for behavioural science

Recall being a teenager, wanting to go to a big party for the first time, and needing to convince your parents to let you go: would you use the same techniques to persuade both of them? Or would you choose certain words and intonations to butter up your mom, while planning a completely different conversation for your father? You know what would work best for each parent because you know them, and so you know which approach each would respond to. We are decades away from building machines that know us in the human sense, but predictions become more accurate as our databases grow bigger, more precise, and more widely employed. In behavioural science and sales, methods based on specific individual characteristics are called personal profiling.

Gathering more information doesn’t necessarily mean manipulation, as we often hear in the news. Getting to know us might mean these algorithms can understand what’s good for us, which can then help us avoid easily preventable mistakes. Beneficial examples already exist, such as precision medicine, smart toilets that spot cancer, and selfies that can detect heart disease. Professor Chris Toumazou at Imperial College London even took it one step further by opening the field of nudgeomics and co-founding DnaNudge. The company’s DnaBand uses the information contained in your genome to help you find the healthiest products for your body while you are out grocery shopping, and prompts you to be more active if you have been too sedentary.

Developing more human AI

So far, we’ve seen how AI gathers information to make choices: it is all about finding the most appropriate answer to a given question. But, as behavioural science suggests, there is much more to decision-making than rationality. We have known for a while now that feelings are crucial to choice, and that basing our decisions entirely on logic would be a complete disaster. Humans are quite irrational, and predictably so, in Dan Ariely’s words. An AI system that completely lacks emotional intelligence therefore risks being inaccurate, if not dangerous. Innovative companies such as Affectiva and Empatica are at the forefront of technologies that can read and use information about our emotions. Uses of emotion-reading systems range from cars that sense what’s going on inside the cabin, to glasses that help autistic children connect with their parents, not to mention the potential of endowing robot caregivers with more human-like sensitivity, a strange but relevant note given that robots may soon populate retirement homes. These examples begin to indicate how insights from behavioural science can be used to further tailor AI to our needs.

What behavioural science can do for AI

However, despite behavioural science assisting efforts to make AI more human, its true potential to help guide the development of AI has been somewhat disregarded, perhaps wrongfully so. As researcher and entrepreneur Nurit Nobel put it, “These algorithms are designed to facilitate choice by humans. Considering this, it is surprising to discover how little research in human behaviour normally goes into their design.” Leveraging insights from behavioural science more fully could make AI tools even more effective at guiding decision-makers.

One example of where this could be useful is in helping us overcome human limitations in the development of AI. After all, the people who develop all these remarkable AI innovations are still… people! Unconscious biases can shape the creation of AI, at times with harmful results, such as hiring tools that disadvantage women and racist image recognition systems. By making us more aware of our shortcomings, behavioural science can help keep software from inheriting the same inaccurate biases that guide us.

Yet another reason for bridging behavioural science and AI is to support the development of algorithms that humans can trust, which I’ll expand on below, where we talk ethics.

Let’s talk about ethics, baby!

The most advanced machine learning models we currently have rely on huge databases. This already raises questions about the morality of data collection methods, particularly around privacy, but there is also the larger concern about AI “knowing” and nudging us. It could be argued that AI-based nudging introduces no new moral issue. However, along with the wider ethical considerations regarding nudging and free will, a unique dimension of this debate relates to AI being engineered. How can we rely on a black box that makes mistakes every now and then without any acceptable justification? How can we delegate a job to something that reasons in a way we cannot possibly follow or understand?

To address these ethical issues, many researchers are turning to explainability, a hot topic in AI ethics. The idea is that the engineers developing a model should be able to explain how its conclusions are reached. Explainability would, firstly, earn more trust from human users by addressing the behavioural factors at play and, secondly, allow us to improve our policies by drawing new insight from measures already proven to work in the real world. Perhaps most importantly, it would provide a means of inspection for legal purposes.

All in all, though accompanied by many ethical questions, the meshing of behavioural science and AI could help make technology more human, leading us to smarter, healthier, wiser choices; hopefully, for the better.

Want more? Stay tuned! And in the meantime, here’s a little more to read for the curious and impatient:



The official blog of the UCL Behavioural Innovations Team, where UCL students can publish articles on anything related to behaviour that has captured their imagination. Our writers, though from various academic backgrounds, all share a collective interest in behavioural science.
