LatinXinAI

Global Community of LatinX in AI (LXAI) Professionals, Engineers, Researchers, and Entrepreneurs! Join us at www.latinxinai.org

Should your AI look human?


A few months ago, a performance management company made headlines when it announced it was deploying AI employees and giving them onboarding and performance goals. The so-called 'employees' had names and faces. They had managers, and a place in the company's org chart. All this while the same company had held layoffs affecting real employees only a few months before. The backlash was instantaneous, with many pointing out the tone-deafness and the perceived lack of respect for the company's own human workers. The company ultimately rolled back the idea of AI employees entirely.

A prototype shown by Lattice, a performance management company, of how human and AI employees could work side by side. Would their AI employees be competing for promotions and raises too?

Anthropomorphizing AI

It's easy to anthropomorphize AI; that is, to treat it or portray it as a human being. A few months ago, I posted a video about my conversation with the GPT-4o interface right when voice prompts came out. I was impressed with the realism of the audio and the smoothness of the responses. Clearly, as AI processes more data, it types and sounds more like us, and this can genuinely create a joyful experience for the user. Yes, I really enjoyed talking to the interface about my trip to Japan. But others have gone even further, using realistic AI interfaces as their therapists, or as friends.

I did have a ton of fun talking to GPT-4o when it came out. Of course, there have been some reports of creepy behaviors with the latest updates, such as GPT-4o cloning the user's voice mid-conversation and randomly exclaiming 'No!'.

But when it comes to productizing AI — and especially in a B2B context — there are two big factors that ought to be considered: anxious feelings about AI automation in the workplace, and the uncanny valley effect that still shapes the fundamental way humans interact with human-looking machines — or AI, for that matter. These are very real phenomena, and they may be converging before our very eyes. Let's go over some research and empirical findings on each.

AI Anxiety in the Workplace

AI anxiety is real. Some refer to it as FOBO — the 'fear of becoming obsolete'. Psychologically, AI anxiety is not a diagnosis in itself, but studies are already showing statistical relationships between workers' reported fears of AI taking their jobs and loss of sleep and generalized anxiety symptoms. Earlier this year, the American Psychological Association (APA) conducted a survey and found that nearly 40% of workers worried or felt anxious about AI taking their jobs. Ernst & Young (EY), a consulting firm, also surveyed US companies and found that 71% of employees had some sort of concern about AI. It's still too early to know whether this wave of FOBO is entirely justified. But it is also true that many tech companies pushing their AI solutions have held significant layoffs in the last two years.

Many visualizations would indicate 2024 is not as bad as 2023, but the constant buzz about AI and the increasing mention of 'restructuring' in some company communications can keep many up at night. In the US, it is also the case that interest rates remain high and many organizations have been freezing their hiring.

I am not yet aware of substantial research on how workers generally perceive human-looking AI assistants — although surely those studies are coming. But it's not too hard to imagine a certain degree of animosity and competition toward something when you believe it might be taking your job away, and when jobs themselves may become scarcer. There is plenty of organizational psychology research on the negative consequences of workplace envy, employee competitiveness, animosity, toxic workplace environments, and perceptions of scarcity.

While it might be too soon to empirically evaluate the impact of human-like AI avatars on workers, we do have some early evidence of negative feelings about them from users in general. Meta tried out AI bots with celebrity voices and has already rolled them back, citing a focus on other areas of AI instead. Samsung recently tried out an anthropomorphic AI avatar that was quickly rolled back, allegedly due to inappropriate internet-sourced responses as well as mixed feelings about the 'cringe' factor of the character.

It's hard to know how much of the negativity about some human-looking AI is truly directed at the fact that the AI is starting to behave more like a human, or whether consumers are just starting to develop some AI fatigue in general. But there is a strong argument to be made that we, as humans, have a predisposition to find realistic-looking AI unsettling. And this is where we turn to the idea of the 'uncanny valley'.

The uncanny valley problem

The uncanny valley, proposed in 1970 by Masahiro Mori, is the hypothesis that as a robot's appearance becomes more human-like, it becomes more appealing to humans, but only up to a point: when the resemblance becomes very close, slight imperfections cause discomfort or eeriness. This article by Forbes goes into quite some detail about the uncanny valley effect — with some uneasy pictures that I don't want to replicate here. For those who prefer a statistical visualization, I found this to be an adequate representation of the valley: as we slowly became better at making robots realistic, we started to find them more likable, but there comes a point where they suddenly fall just short enough of actually looking like a human being for our brains to find them unsettling.

To be fair, we may soon be over this kind of visual uncanny valley with how good AI image generation has become. If my Facebook feed is any proof, we are already surrounded by AI images that many cannot distinguish from reality. But the uncanny valley is not just a visual one — the way an AI sounds (even if it sounds like our own voice), and how a machine moves also inform our attitudes towards them.

And then there's an entirely new section of the curve that we haven't yet gotten to explore: the stretch between the bottom of the valley and real humans, when the images, sounds, and behaviors all approach human likeness a little too closely. Many have been thinking about the implications this could have when it becomes easier to purchase an AI romantic partner than to find a real one, or when the kind of jealousy and desire humans sometimes display toward one another gets transferred over to an AI product they are forced to use.

What should AI tools look like, then?

But not all is grim. There are ways to make AI interfaces empathetic and enjoyable whilst avoiding both automation anxiety and the uncanny valley. In 2022, a robotics team at MIT ran a fascinating experiment with dinosaur- and animal-looking robots. They found that when a robot behaves like an animal, such as a dog or even a dinosaur, humans have negative emotional reactions to seeing it kicked or pushed. Sometimes, the negative response was even stronger than when seeing a fellow human receive the same treatment.

Poor dog!!!

The popular science YouTube channel Veritasium has an hour-long video showcasing different robots and AIs, and based on the interviews, many experts seem to agree that making robots look like humans is not necessarily the way to go for all use cases. For one, a humanoid shape physically limits the form and functionality that allow robots to specialize. It can also have confusing emotional effects on people, in part due to the uncanny valley effect we just discussed above.

So what should an AI tool, particularly in a B2B context, look like? Perhaps it could look like an animal avatar, or a pleasant robot like the one in Big Hero 6. But, just as well, perhaps it does not need to look like anything. Instead, it can act like a tool that empowers and highlights human labor, diminishing both workplace anxiety and the uncanny valley effect all at once.

One of the world's most renowned consulting firms, McKinsey & Co., is pushing the idea of 'hybrid intelligence' — a term that still emphasizes the uniqueness and power of human intelligence, but puts AI augmentation front and center. If we are to follow this notion, the question of what an AI 'should look like' becomes far less important than the question of what the AI should do to make your workers better. Just as we have solid evidence about fear and anxiety over AI in the workplace, we also have solid evidence that employees can be excited about AI in the right context, particularly if they feel there is a way for them to learn new skills and remain competitive.

Conclusion

Let's go back to the original example in this article, where Lattice unveiled AI employees. It is not too hard to imagine a product rollout that would have gone far better for them. What if, instead of calling them 'AI employees', they had simply referred to them as helper bots created to 'serve' and 'empower' their employees? What if they had pitched the helper bots as tools that removed from their employees' days the tasks they least liked, so they could focus on what they did best? Same product, no anthropomorphism. Probably a lot less anxiety. Probably no rollback.

In the next few years, it will be tempting to anthropomorphize AI products. The quality of image and video generation will help overcome the visual uncanny valley. But the fear of replacement, the gaps in AI education, and the perception of job scarcity will continue to make it harder for AI products in the B2B setting to muster the same kind of enthusiasm they might earn in more niche applications, like using an AI chatbot as a therapist when the waitlist to see a real one stretches for months. My hope is that companies innovating in this space will take the existing and upcoming psychology research on this matter seriously, and, of course, that they will listen to their employees when deciding when and how to roll out AI solutions into their day-to-day lives.
