AI: Designing ‘Artificial Emotions’ into ‘Artificial Intelligence’

Michael Parekh
6 min read · Jan 8, 2024


… a slippery slope coloring AI UI/UX to come

The Bigger Picture, Sunday, 1/7/24

I’ve written often in these pages that the wide range of existential AI ‘Doomer’ fears does not resonate with me in this AI Tech Wave. For the most part, that’s due to a strong view, relatively weakly held, that the underlying AI technologies, LLM AI models and all, are nowhere close to the feared ‘AGI/Superintelligence’ capabilities that have taken hold, ahead of reality, in the minds of users, AI tech executives, and regulators. This despite the exponential improvements expected over the next few years in the underlying capabilities of Foundation AI models, large and small.

But there is one fear I do have about AI that has not gotten as much attention as it likely should have. And that’s the increasing risk of interacting with AI as if it were human. I even wrote a post last fall on this issue, titled “Don’t Anthropomorphize the AIs”.

As AIs like ChatGPT and others go multimodal this year, with voice, video and other capabilities, this is a growing issue, both in terms of some negatives and, over time, more positives. As outlined earlier, our best technologists are still trying to figure out how LLM AI and the underlying GPU hardware fundamentally do their thing, and how to fuse these AI technologies with traditional software technologies.

Especially as AI gets baked into ‘Smart Agents & Companions’ and ‘Smart Voice’ services used by hundreds of millions, like Apple Siri, Amazon Echo, Google Nest and others. This is ‘The Bigger Picture’ issue I’d like to focus on this Sunday. Let me explain.

This issue of AI user interaction ‘mannerisms’ is increasingly important as hundreds of millions of users are being trained in ‘multimodal AI prompting’. Both text and voice prompts now increasingly require that the AIs be ‘asked nicely’.

Tech professionals are already warning that this trend is accelerating. As Stephanie Palazzolo of The Information noted earlier this week in “Why Users Have to Compliment ChatGPT To Get the Most Out of It”:

“If you’re not getting what you want from OpenAI’s large language models these days, try paying them a compliment. One example, according to Scale AI field CTO Vijay Karunamurthy, is to say: “you’re a very smart computer that takes a lot of time and thinks step by step to answer questions.” Remarkably, that helps Scale AI’s developers get better responses. I kid you not.”

“To be sure, the need for users to carefully word their prompts has been chronicled extensively. I’ve even heard of developers saying, “this is a life or death situation” in prompts to elicit the best responses from AI models. Some companies like Scale AI have also hired experts in the psychology of LLMs to improve their models’ performance, Karunamurthy said.”
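
To make that technique concrete, here is a minimal sketch of the complimentary, ‘think step by step’ framing passed as a system prompt via the OpenAI Python SDK. The model name and the user question are my own illustrative assumptions, not anything prescribed in the article:

```python
# A minimal sketch of the 'flattery' framing described above, using the
# OpenAI Python SDK (v1.x). Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        # The complimentary, step-by-step framing quoted above:
        {
            "role": "system",
            "content": (
                "You're a very smart computer that takes a lot of time "
                "and thinks step by step to answer questions."
            ),
        },
        {
            "role": "user",
            "content": "Why does prompt wording change an LLM's answers?",
        },
    ],
)
print(response.choices[0].message.content)
```

The only ‘trick’ here is the flattering system message; everything else is an ordinary chat completion call.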

The fixes and remedies thus far are tedious and cumbersome. As Stephanie goes on to describe:

“OpenAI is aware of the issue. During last year’s developer day, the startup dedicated an entire 45-minute session to how developers could improve an LLM’s performance by giving it external information to reference (otherwise known as retrieval augmented generation) or examples of how to respond (otherwise known as finetuning). I was surprised to see how much of the process was trial-and-error, and how often even OpenAI’s own researchers ran into issues.”

“And in a guide for users on how to best structure model prompts, OpenAI provided a number of surprisingly tedious strategies, like having to instruct models to act like a “helpful tutor” or remind them to double check they didn’t miss anything in previous responses.”

“Companies have already started building solutions to some of these problems; for example, OpenAI’s “custom instructions” mean that users only have to describe their desired ChatGPT persona — like an elementary school science teacher — once.”
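
The ‘describe the persona once’ idea behind custom instructions can be sketched as a thin wrapper that stores a persona system message one time and prepends it to every request. The class, names, and persona wording below are hypothetical illustrations, not OpenAI’s actual implementation:

```python
# A rough sketch of 'custom instructions': describe a persona once,
# then reuse it across every prompt. Hypothetical class and names.
from openai import OpenAI


class PersonaChat:
    """Reuses one persona instruction across all prompts."""

    def __init__(self, persona: str, model: str = "gpt-4o-mini"):
        self.client = OpenAI()
        self.model = model
        self.persona = persona  # described once, like custom instructions

    def ask(self, question: str) -> str:
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[
                {"role": "system", "content": self.persona},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content


# The 'elementary school science teacher' persona from the quote above:
chat = PersonaChat("You are a friendly elementary school science teacher.")
print(chat.ask("Why is the sky blue?"))
```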

As you may have noticed, most multimodal AIs like ChatGPT already answer every prompt with some version of a chirpy “Certainly!” before proceeding to give answers. Some of this is of course necessary, to help humans continue interacting with an artificial construct using the same manners that we’re taught early on to use with other humans. We’ve all had the urge to say ‘Thank you’ to Siri, Google Assistant and other ‘Smart’ Voice Assistants over the years.

So some of this is just programmed politeness, which is fine. It’s when the actual responses depend on HOW we talk to the AIs that it potentially becomes problematic.

We’re not just conditioning humans to interact with computers using ‘artificial politeness’; we’re also ‘training’ the LLM AIs on ‘artificial emotions’.

Imagine if the ‘Computer’ in Star Trek had been programmed to behave that way. There were some iconic, funny moments when Star Trek characters tried to talk to computers nicely.

Note that the Star Trek computer, from the original 1960s series onward, inspired generations of technologists, including the founders of Google and beyond.

Certainly other science fiction AIs, like HAL in 2001: A Space Odyssey, of “I’m afraid I can’t do that, Dave” fame, were programmed in ways that led to ‘his’ ultimately, dramatically disastrous behavior.

These and other science fiction ‘AI’ computers have inspired both optimism and pessimism among tech industry founders and engineers, as I’ve noted before.

So as we go from a hundred million people using OpenAI’s ChatGPT monthly to billions soon using AI products and services from so many providers, it’s especially important that we go in with eyes open on AI and emotions.

At the very least, we might want to give users options for how the AI is set to interact with us, much as we already do with car driving systems. Most cars today come with various ‘modes’ like Eco, Normal, and Sport. Elon adds a ‘Ludicrous Mode’ to his Teslas.

Perhaps we could offer conversational modes for AI prompting and interaction, like ‘Terse’, ‘Tempered’ and ‘Talkative’, with ‘Treacly’ as a last resort. At the very least, give users options to set the ‘artificial emotions’ in their AI interactions.
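
As a hypothetical sketch of how such modes might work under the hood, each mode could simply map to a tone-setting system prompt. The mode names come from the suggestion above; the wording of each prompt, and the function and model names, are my own illustrative assumptions:

```python
# A hypothetical 'tone mode' setting: map each user-selectable mode
# to a system prompt that sets the AI's conversational register.
from openai import OpenAI

TONE_MODES = {
    "terse": "Answer in as few words as possible. No pleasantries.",
    "tempered": "Answer plainly and neutrally, without praise or chattiness.",
    "talkative": "Answer conversationally, with context and examples.",
    "treacly": "Answer warmly and effusively, with generous encouragement.",
}


def ask_in_mode(question: str, mode: str = "tempered") -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": TONE_MODES[mode]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(ask_in_mode("Summarize my options for getting to the airport.", mode="terse"))
```

The design point is that the ‘artificial emotion’ lives in one explicit, user-selectable setting, rather than being implicitly baked into every response.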

As mentioned earlier, we’re not just conditioning humans to interact with computers using ‘artificial politeness’; we’re also ‘training’ the LLM AIs on ‘artificial emotions’.

If we’re already worried about ‘artificial intelligence’, then these other ‘artificial’ artifices should be deployed with care as well, and used thoughtfully going forward. We should be mindful of how we color the way our AIs deal with us, and the way we deal with them.

We need to make sure that neither they nor we get too used to emotions in the way we design the user interfaces and user experiences (UI/UX) in these early days.

Especially while we get AI to help us do things differently and better.

Good habits start early, as they say, especially given the long AI road ahead. Stay tuned.

(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)

(You can also subscribe to my Newsletter “AI: A Reset to Zero” for free on Substack for more content like this.)


Michael Parekh

Investor in tech-driven business resets. Founded Goldman Sachs Internet Research franchise in 1994. https://twitter.com/MParekh