A driverless world

How does Artificial Intelligence affect human emotional wellbeing?

Co-authored with Chris Merritt

What is AI?

Artificial Intelligence (AI) — broadly defined as the ability of machines to think and to replicate human cognitive tasks — is spreading around us. Be it targeted advertising on social media, filtering applicants for a job, determining air ticket prices, controlling your central heating through voice recognition, creating cultural output or regulating traffic flow, AI is performing an increasing number of tasks in human life. [1–2]

Where is AI going?

Elon Musk confidently predicts that by the end of 2017 his driverless Tesla car will be able to travel safely coast-to-coast across the US with no human input.[3] Social robots — AI-based machines that live with us — may routinely perform many domestic or care tasks within a decade.[4] And by 2050 it’s widely estimated that we will have progressed beyond these specific applications and achieved Artificial General Intelligence (AGI).[5] AGI is part of the so-called ‘singularity’: the point at which computers can out-perform any human at any cognitive task, and where human-computer integration is commonplace.[6] What happens after that is anyone’s guess.

Benign scenarios include humans having computer parts within their bodies to help process data more quickly. The ‘neural lattices’ envisaged by some in AI would act as a kind of ‘extra cortex’ on the outside of our brains, linking us to electronic devices with speed and efficiency. A significant upgrade on the machine parts — electronic pacemakers and titanium joints — in today’s humans or, as some have called them, ‘primitive cyborgs’.

Apocalyptic variations on the future of AI often focus on military and defence applications, with the concept of ‘Fully-Autonomous Weapons’ particularly controversial.[7] A weapon system that could search for, identify, select and destroy a target based on algorithms and learning from past security threats — with no real-time human input whatsoever — is a pretty terrifying concept. These visions of an AI-dominated human future approximate a sci-fi dystopia reminiscent of The Terminator.

Accidental discrimination

The destruction of humankind may be some way off, but today’s AI is already ringing ethical alarm bells. Just in the past month, machine-learning algorithms have taken flak for pro-actively suggesting bomb-making components to Amazon shoppers,[8] perpetuating gender inequalities in employment advertising, and spreading hate messages through social media.[9] Much of this misfiring is due to the quality and nature of the data the machines learn from. Fed biased data by humans, they will reach flawed conclusions.[10]
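To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn, with invented data): a toy hiring model trained on historically skewed decisions simply learns to reproduce the skew. Every feature name and number below is an assumption made up for the example, not a description of any real system.

```python
# Toy illustration: a "hiring" classifier trained on biased historical data.
# All data is synthetic; the point is that the model learns the bias, not merit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

skill = rng.normal(0, 1, n)          # a genuine measure of ability
group = rng.integers(0, 2, n)        # an irrelevant group label (0 or 1)

# Past hiring decisions: partly skill-based, but heavily skewed against group 1.
hired = (skill - 1.5 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two equally skilled candidates who differ only in group membership:
print(model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1])
# The group-1 candidate gets a much lower "hire" probability, because the model
# has faithfully reproduced the bias baked into its training data.
```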

Such outcomes raise serious questions about the ethics of governing algorithms and broader AI mechanisms in daily life. Recently, a young American man with a history of mental health difficulties was turned down for a job after an algorithm filtered his responses to a personality questionnaire. He believed he had been unfairly — and illegally — discriminated against, but because the company did not understand how the algorithm worked, and employment law does not currently cover machine decision-making with any clarity, he had no recourse to appeal.[11] Similar concerns have been voiced over China’s algorithm-led ‘social credit’ scheme, whose pilot last year gathered data from social media (including friends’ posts) to rate the quality of a person’s ‘citizenship’, before applying this to decisions such as whether to give that person a loan.[12]

Need for AI ethics and laws

Clear systems of ethics for AI operation and regulation are needed, particularly when governments and corporations build the acquisition and maintenance of power, or financial profit, into the over-arching goals driving their algorithms. Israeli historian Yuval Harari has discussed this with respect to driverless cars and a new AI version of philosophy’s Trolley Problem.[13] Innovations like MIT’s Moral Machine attempt to gather data on human moral judgements to inform machine ethics.[14]

Thinking (and feeling) more broadly

But ethics isn’t the only domain where questions around AI and human wellbeing have been raised. AI is already having a significant emotional impact on humans. Despite this, emotion has been largely neglected as a topic of AI research. A casual look at the Web of Science academic database throws up 3,542 peer-reviewed articles on AI in the past two years. Only 43 of them — a mere 1.2% — contain the word emotion. Even fewer actually describe research on emotion in AI. Neuroscientist Luiz Pessoa has argued that emotion should be a prerequisite of cognitive architecture in intelligent machines.[15] Yet 99% of AI research seems to disagree.

AI knows how we’re feeling

When we talk about emotion in AI, we are referring to several different things. One area is the ability of machines to recognise our emotional states and act accordingly. This field of ‘affective computing’ is developing quickly through sophisticated biometric sensors capable of measuring our galvanic skin response, brain waves, facial expressions and other sources of emotional data.[16] Most of the time now, they get it right.

Applications of this tech range from cuddly to downright sinister. Companies can get feedback on your emotional response to a film, and try to sell you something linked to it in real-time through your smartphone. Politicians might craft messages guaranteed to appeal emotionally to a specific audience. Less cynically, a social robot might tailor its response to better assist a vulnerable human in a medical or care setting.[17] Or an AI-based digital assistant might choose a song to help lift your mood. Market forces will propel this field, widen its reach, and refine its abilities.
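As a rough sketch of what this looks like in code: the feature names, thresholds and responses below are invented for illustration, and real affective-computing systems learn these mappings from large labelled datasets rather than hand-written rules.

```python
# Toy emotion-recognition pipeline: read biometric signals, infer a state,
# and "act accordingly". Purely illustrative; not any particular product's API.
from dataclasses import dataclass

@dataclass
class BiometricSample:
    skin_conductance: float   # galvanic skin response, in microsiemens
    heart_rate: float         # beats per minute
    smile_intensity: float    # 0.0 to 1.0, from facial-expression analysis

def estimate_emotion(sample: BiometricSample) -> str:
    """Crude rule-based stand-in for a trained emotion classifier."""
    if sample.smile_intensity > 0.6 and sample.heart_rate < 100:
        return "happy"
    if sample.skin_conductance > 8.0 and sample.heart_rate > 110:
        return "stressed"
    return "neutral"

def respond(emotion: str) -> str:
    """A digital assistant adapting its behaviour to the user's inferred mood."""
    return {
        "happy": "Keep the upbeat playlist going.",
        "stressed": "Suggest a break and switch to calmer music.",
        "neutral": "Carry on as normal.",
    }[emotion]

print(respond(estimate_emotion(BiometricSample(9.5, 120, 0.1))))
# -> Suggest a break and switch to calmer music.
```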

But how do we feel about AI?

A second area of emotion in AI — and one where less is known — is human emotional response to AI. Humans seem to relate to AI as we do with most technology: attributing personalities to inanimate objects, imbuing appliances with intentionality, and generally projecting emotions onto the tech we use (“It’s pissed off with me, that’s why it’s not working”).[18]

This is known as the Media Equation,[19] and involves a form of doublethink: we understand cognitively that the machines are not sentient creatures, but we respond to them emotionally as if they are. This may stem from our fundamental human needs to relate socially and bond emotionally, without which we become depressed. We are driven to relate to people, animals and, it turns out, even machines. Sensory experience is a huge part of this bonding drive and its reward mechanism, as well as a source of pleasure in its own right.

Fake socialising

When the experiences of bonding and belonging are absent in our environments, we are motivated to reproduce them through TV, film, music, books, video games and anything that can provide an immersive social world. This is known as the Social Surrogacy Hypothesis [20] — an empirically-backed theory from social psychology — and it is starting to be applied to AI.

Basic human emotions are in evidence even with disembodied AI: happiness at a spontaneous compliment by a digital assistant, anger at the algorithm that rejected your mortgage application, fear at the prospect of being carried in a driverless car, sadness at Twitter’s AI-based refusal to verify your account (I’m still nursing a bruised ego from that…).

We are the robots

Emotional reactions are stronger with ‘embodied’ AI, which often means robots. And the more a robot resembles a human, the stronger our emotional response to it. We feel drawn to bond with anthropomorphic robots, express positive emotion towards them, and empathise and feel bad when we see them harmed.[21] We even feel sad if they reject us.[22]

Interestingly though, if a robot is almost completely humanlike — but not perfectly human — our evaluation suddenly drops and we reject it. This is known as the ‘uncanny valley’ theory,[23] and may reflect the unpleasant feeling of being deceived, a reaction that a clunky metal robot resembling C-3PO from Star Wars does not provoke.

A soft touch

AI is now utilising haptic technologies — touch-based experience — to further the emotional bonds between humans and robots. Perhaps the most famous example — Paro the fluffy seal — has been found to be beneficial for a range of groups in care settings, in different countries.[24–25]

Social and emotional robots have a number of potential applications: caring for the elderly to promote autonomous living, and supporting people experiencing isolation or living with dementia, autism or disabilities. Touch-based sensory experience, which is increasingly being integrated into immersive technologies like Virtual Reality, is a part of this.

In other domains, AI may take over routine domestic chores or tasks like teaching. A survey of over 750 South Korean children aged 5–18 found that while most had no problem accepting lessons from an AI robot at school, many had concerns about the emotional role of an AI teacher. Would it be able to offer counselling, or relate emotionally to the pupil?[26] Nevertheless, over 40% were in favour of replacing human teachers with AI robots in the classroom.

Anything we’re missing?

Harvard psychologist Steven Pinker has argued that synthesised experiences such as those of social surrogacy described above allow us to deceive ourselves.[27] We are not having the experience itself, but we trick our brains into believing that we are, so that we feel better. However, the facsimile is not as good as the real thing.

Clearly people can experience genuine emotions from interactions with AI. But would we be missing something in a not-too-far-off world populated by driverless cars, disembodied assistants, robotic teachers, cleaners and playmates?

The scenario is reminiscent of Harry Harlow’s famous experiments, where orphaned monkeys reliably chose a tactile ‘mother’ with soft fur and no milk over a cold wire-mesh ‘mother’ that dispensed milk.[28] Could we be setting ourselves up with everything we could want technologically, only to realise that fundamental human needs of bonding and pleasures of real-world sensory experience are absent? Will ‘luxury’ in the future be the social equivalent of artisanal produce compared to mass-produced junk food: authentic sensory experiences and contact with real people rather than robots?

The answer is that right now, we don’t know. But the fact that 99% of AI research is not paying attention to emotion suggests that if emotion does play a greater role in AI, it’ll either be as an afterthought, or because emotional data lets the AI-operated device, and whoever deploys it, generate more power and money.

A digital humanist agenda [29] might help us to remember that, as we hurtle towards the singularity and convergence with computers in and around our bodies, we shouldn’t forget to nurture our ancient mammal brains and their need for emotional bonds. The OpenAI project co-founded by Elon Musk is a step in this direction, aiming to make the benefits of AI available to all.

Let’s take it a step further and consider emotional wellbeing in AI too. Who knows where that might take us?

[1] Aaronovitch, D. (2017) The AI Revolution. The Briefing Room, BBC Radio 4, broadcast August 24, 2017. Accessed September 22, 2017 from: http://www.bbc.co.uk/programmes/b091wb34

[2] Stokel-Walker, C. (2017). Novels, pop songs and artwork: AI is taking on culture. Wired, January 17. Retrieved September 23, 2017 from: http://www.wired.co.uk/article/trend-decoder-ai-generated-artworks

[3] Musk, E. (2017). The future we’re building — and boring. TED Talk, 3 May. Accessed September 22, 2017 at: https://www.youtube.com/watch?v=zIwLWfaAg-8

[4] Leite, I., Martinho, C., & Paiva, A. (2013). Social robots for long-term interaction: a survey. International Journal of Social Robotics, 5(2), 291–308.

[5] Newton-Rex, E. (2017). The State of AI. On Coding blog series, February 18. Retrieved September 22, 2017 from: https://medium.com/on-coding/the-state-of-ai-9aae385c2038

[6] Good, I. J. (1965). Speculations concerning the first ultraintelligent machine. Advances in Computers, 6, 31–88.

[7] Asaro, P. (2012). On banning autonomous weapon systems: human rights, automation, and the dehumanisation of lethal decision-making. International Review of the Red Cross, 94, 687–709.

[8] Kennedy, S. (2017). Potentially deadly bomb ingredients are ‘frequently bought’ together on Amazon. Channel 4 News, September 18. Retrieved September 22, 2017 from: https://www.channel4.com/news/potentially-deadly-bomb-ingredients-on-amazon

[9] Aaronovitch (2017), ibid.

[10] Crawford, K. (2016). Artificial Intelligence’s White Guy problem. New York Times, June 25. Retrieved September 23, 2017 from: https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html?mcubz=3

[11] O’Neil, C. (2016). How algorithms rule our working lives. The Guardian, September 1. Retrieved September 24, 2017 from: https://www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives

[12] Hornby, L. (2017). China changes tack on ‘social credit’ scheme plan. Financial Times. Retrieved September 23, 2017 from: https://www.ft.com/content/f772a9ce-60c4-11e7-91a7-502f7ee26895

[13] Harari, Y. N. (2016). Homo Deus: A Brief History of Tomorrow. London: Harvill Secker.

[14] http://moralmachine.mit.edu/

[15] Pessoa, L. (2017). Do intelligent robots need emotion? Trends in Cognitive Sciences, e-pub ahead of print.

[16] Calvo, R. A., D’Mello, S., Gratch, J., & Kappas, A. (2015). Introduction to Affective Computing. In R. A. Calvo, S. D’Mello, J. Gratch, & A. Kappas (Eds.) The Oxford Handbook of Affective Computing (pp. 1–10). Oxford: Oxford University Press.

[17] Kolling, T., Baisch, S., Schall, A., Selic, S., Rühl, S., Kim, Z. et al. (2016). What is emotional about emotional robotics? In S. Y. Tettegah & Y. E. Garcia (Eds.) Emotions, Technology and Health (pp. 85–104). London: Academic Press.

[18] Rosenthal-von der Pütten, A. M., Krämer, N. C., Hoffman, L., Sobieraj, S., & Eimler, S. C. (2013). An experimental study on emotional reactions towards a robot. International Journal of Social Robotics, 5, 17–34.

[19] Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places. Cambridge: Cambridge University Press.

[20] Derrick, J. L., Gabriel, S., & Hugenberg, K. (2009). Social surrogacy: How favoured television programs provide the experience of belonging. Journal of Experimental Social Psychology, 45, 352–362.

[21] Hoenen, M., Lübke, K. T., Pause, B. M. (2016). Non-anthropomorphic robots as social entities on a neurophysiological level. Computers in Human Behavior, 57, 182–186.

[22] Nash, K., Lea, J. M., Davies, T., & Yogeeswaran, K. (2017). The Bionic Blues: Robot rejection lowers self-esteem. Computers in Human Behavior, e-pub ahead of print.

[23] Mori, M. (1970). The Uncanny Valley. Energy, 7(4), 33–35.

[24] Marti, P., Bacigalupo, M., Giusti, L., Mennecozzi, C., & Shibata, T. (2006). Socially-assistive robotics in the treatment of behavioural and psychological symptoms of dementia. Biomedical Robotics and Biomechatronics, IEEE conference abstract.

[25] Sung, H.-C., Chang, S.-M., Chin, M.-Y., & Lee, W.-L. (2015). Robot-assisted therapy for improving social interactions and activity participation among institutionalised older adults: A pilot study. Asia-Pacific Psychiatry, 7(1), 1–6.

[26] Shin, N. (2017). Students’ perceptions of Artificial Intelligence Technology and Artificial Intelligence Teachers. The Journal of Korean Teacher Education, 34(2), 169–192.

[27] Pinker, S. (1997). How The Mind Works. New York: Norton & Company.

[28] Harlow, H. (1958). The nature of love. American Psychologist, 13, 673–685.

[29] Pettey, C. (2015). Embracing Digital Humanism. Gartner Inc., June 5. Retrieved June 22, 2016 from: http://www.gartner.com/smarterwithgartner/embracing-digital-humanism/
