
Mind Your AI Manners: The Bottom-Line Impact of Cultural Intelligence

5 min read · Dec 9, 2024
Without adding cultural intelligence to AI, you’ll get a mechanical parrot. Photo by author David E. Sweenor

Remember when HAL 9000 ominously said, “I’m sorry, Dave. I’m afraid I can’t do that”? While our current incarnation of AI won’t lock us out of the spaceship, new research reveals that large language models (LLMs) respond differently based on the politeness of our prompts. Just as HAL’s careful courtesy masked its refusal to comply, the way we phrase our requests to AI affects its performance. So, next time you fire up your favorite generative AI app, don’t forget to say “please” and “thank you,” just like your parents taught you.

Research from Waseda University and RIKEN shows that politeness levels in prompts directly influence the quality of AI responses.[1] By studying interactions across English, Chinese, and Japanese language models, the researchers found that AI systems reflect culturally specific patterns. For instance, overly polite or harsh prompts can degrade performance, and the optimal tone varies across languages.
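If you want to see this effect for yourself, the easiest experiment is to run the same task at several politeness levels and compare the outputs. The sketch below is a minimal illustration of that idea, not the researchers’ methodology: the politeness templates are my own examples, and query_llm is a hypothetical stand-in for whichever LLM client you already use.

```python
# A minimal sketch for probing how prompt politeness affects LLM output.
# query_llm() is a hypothetical placeholder for your own LLM client call;
# the politeness templates are illustrative assumptions, not the
# Waseda/RIKEN study's prompts.

from typing import Callable

# The same task phrased at three politeness levels (English examples;
# the research suggests the optimal register differs by language).
POLITENESS_VARIANTS = {
    "rude": "Summarize this report. Now.",
    "neutral": "Summarize this report.",
    "polite": "Could you please summarize this report? Thank you.",
}


def compare_politeness(report_text: str, query_llm: Callable[[str], str]) -> dict:
    """Run the same task at each politeness level and collect the responses."""
    results = {}
    for level, template in POLITENESS_VARIANTS.items():
        prompt = f"{template}\n\n{report_text}"
        results[level] = query_llm(prompt)
    return results


if __name__ == "__main__":
    # Stub client so the sketch runs without external dependencies; swap in
    # a real LLM call (hosted API or local model) to test this yourself.
    def fake_llm(prompt: str) -> str:
        return f"[model response to a {len(prompt)}-character prompt]"

    sample_report = "Q3 revenue grew 12% year over year, driven by APAC expansion."
    for level, answer in compare_politeness(sample_report, fake_llm).items():
        print(f"{level}: {answer}")
```

To evaluate the results at scale, you would score each response against a reference answer rather than eyeballing them, and repeat the comparison in each target language.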

For technology leaders scaling AI operations globally, deploying LLMs in different languages is not a simple copy-paste exercise. Models trained for one cultural context may perform poorly in another, requiring careful adaptation to local norms. This challenge should be a key consideration for CIOs, CTOs, and Chief AI Officers (CAIOs) planning international AI rollouts.



Written by David Sweenor

David Sweenor, founder of TinyTechGuides, is an international speaker and acclaimed author with several patents. He specializes in AI, ML, and data science.
