Emotional AI, anyone? Human-first, please!

Lizzie Hughes
surveillance and society
3 min read · Aug 24, 2023

In this post, Vian Bakir, Alexander Laffer, and Andrew McStay reflect on their article ‘Human-First, Please: Assessing Citizen Views and Industrial Ambition for Emotional AI in Recommender Systems’, which appeared in issue 21(2) of Surveillance & Society.

Image used with permission from authors’ design fiction narrative.

If your digital devices and services could offer you more personalised experiences by reading your emotions, would you want them to? Our study, the subject of our recent paper in Surveillance & Society, finds only limited interest in this, and even then with strong caveats.

“Emotional AI” is AI that reads, reacts to, and simulates understanding of human emotions through text, images, voice, computer vision and biometric sensing. Questionable on grounds of morality, social safety and accuracy, it is nonetheless being developed and deployed in consumer-facing sectors worldwide. For instance, patents from Amazon (the world’s largest online marketplace) and Spotify (the world’s largest music streaming service provider) envisage the use of biometric-based emotional AI in their recommendation engines to offer users highly tailored services, ads and products from their platforms.

The European Union has been developing a world-leading AI Act to encourage innovation and trust in AI applications. When we were writing our paper, the draft Act largely viewed emotion recognition systems (such as those envisaged by Amazon and Spotify) as posing limited risk, provided they were not used in settings like education, employment, justice, law or immigration. As such, it was set to impose transparency obligations on such systems: users would have to be told that the systems were in use, allowing them to make informed choices about engaging with them.

However, a more recent version of the draft Act approved by the European Parliament in June 2023 goes further. Inspired by critique of facial coding applications and their fundamental unreliability (due partly to ambiguity about the nature of emotion, and whether it can be “read”), it now regards much “emotion recognition” as “high risk”. This means that AI systems (such as those envisaged by Amazon and Spotify) will have to undergo an EU conformity assessment process, to ensure that products are safe and respect citizens’ fundamental rights.

With EU lawmakers fluctuating on how risky they perceive emotional AI to be, it is useful to understand what the public think. With few qualitative user-based studies on emotional AI, our UK-based study is important in presenting the views of a diverse group of 46 participants (older, younger, ethnic minority or disabled) on the prospect of emotional AI in recommender systems. Ascertaining lay attitudes towards emerging technologies is hard, given the technologies’ complexity and the difficulty of situating abstract propositions (like emotion profiling) in everyday life. To overcome these obstacles, we developed, and deployed in focus groups, an interactive narrative method based on design fiction principles. This set emotional AI use cases (including the Amazon and Spotify patents) within a fictional narrative to explore participants’ views on the impact of technology on people’s lives, social institutions and norms.

The method elicited rich discussion among participants, and we find some positive views on the technology’s usefulness. But our most prominent themes are negative, including a desire for a human-first environment that flags a generalised sense of alienation, as well as uneasy terms of engagement and resignation to these systems.

We recommend that platforms using biometric-based emotional AI in recommender systems pay attention to our participants’ desire for the products to be genuinely useful. Given the negative themes our participants expressed, we also recommend that giving users a genuine choice about whether or not to use emotional AI systems is paramount.

Lizzie Hughes

Associate Member Representative, Surveillance Studies Network